CN115131470A - Image-text material synthesis method and device, electronic equipment and storage medium - Google Patents

Image-text material synthesis method and device, electronic equipment and storage medium

Info

Publication number
CN115131470A
CN115131470A (application CN202210534239.1A)
Authority
CN
China
Prior art keywords: resource file, entity, target, rendering, text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210534239.1A
Other languages
Chinese (zh)
Inventor
梁宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210534239.1A
Publication of CN115131470A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/18: File system types
    • G06F16/188: Virtual file systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to the field of computer technology, and in particular to a method and apparatus for synthesizing image-text material, an electronic device, and a storage medium, which are used to improve the efficiency of synthesizing image-text material. The method comprises the following steps: creating an entity corresponding to a preconfigured resource file, where the resource file comprises a decoration resource file and a picture resource file to be synthesized; obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file; drawing corresponding picture entities and text entities in a rendering canvas according to the resource file and the target drawing attributes; and synthesizing the target image-text material based on the picture entities and the text entities. Because the picture entities and text entities are drawn in the rendering canvas from a preconfigured resource file and the target image-text material is automatically synthesized from the drawn entities, no manual adjustment is needed, which improves the efficiency of image-text material synthesis.

Description

Image-text material synthesis method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for synthesizing image-text materials, an electronic device, and a storage medium.
Background
With the development of intelligent terminals and audio/video technologies, objects (i.e., users) can browse, create, and share multimedia resources such as pictures and videos through intelligent terminals; during creation, more and more objects enrich the content of pictures or videos by synthesizing image-text materials.
In the related art, synthesized image-text material is mainly obtained by having an object select suitable materials from the decoration materials and base-map materials provided by editing software, manually adjusting the position, size, and so on of the selected materials, and finally producing a synthesized picture or video from the selected materials and their adjustments. However, when image-text materials are combined in this manner, the object generally needs to select and adjust several times to achieve a good effect, which takes a long time.
Therefore, how to improve the efficiency of synthesizing the image-text materials is a problem to be solved urgently at present.
Disclosure of Invention
The embodiment of the application provides a method and a device for synthesizing image-text materials, electronic equipment and a storage medium, which are used for improving the efficiency of synthesizing the image-text materials.
The image-text material synthesis method provided by the embodiment of the application comprises the following steps:
creating an entity corresponding to the resource file according to a pre-configured resource file, wherein the resource file comprises a decoration resource file and a picture resource file to be synthesized;
obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file;
drawing corresponding picture entities and character entities in a rendering canvas according to the resource files and the target drawing attributes;
and synthesizing a target image-text material based on the image entity and the text entity.
The image-text material synthesizing device provided by the embodiment of the application comprises:
a creating unit, configured to create an entity corresponding to a resource file according to a preconfigured resource file, where the resource file comprises a decoration resource file and a picture resource file to be synthesized;
the acquisition unit is used for acquiring the target drawing attribute of the entity according to the rendering configuration information of the resource file;
the drawing unit is used for drawing corresponding picture entities and character entities in a rendering canvas according to the resource files and the target drawing attributes;
and the synthesis unit is used for synthesizing the target image-text material based on the image entity and the text entity.
The apparatus further comprises a compiling unit configured to:
compiling the resource file into a target byte code format;
the obtaining unit is specifically configured to:
in a target bytecode operation environment, obtaining a target drawing attribute of the entity according to the rendering configuration information of the resource file;
the drawing unit is specifically configured to:
and drawing corresponding picture entities and character entities in a rendering canvas according to the resource files in the target byte code format and the target drawing attributes.
Optionally, the creating unit is specifically configured to:
in an initial code running environment, analyzing pre-configured template configuration information to obtain first address information of the resource file, and downloading the resource file according to the first address information, wherein the resource file is in an initial code format;
and traversing the resource file and creating an entity corresponding to the resource file.
Optionally, the apparatus further includes a saving unit:
saving the resource file to a virtual file system, and generating storage information of the resource file;
and generating rendering configuration information of the resource file according to the storage information and pre-configured rendering information, and storing pre-configured template configuration information and the rendering configuration information to the virtual file system.
Optionally, the target rendering attribute at least includes a position and a size of the entity; the obtaining unit is specifically configured to:
in the target bytecode operation environment, acquiring second address information of the resource file in the virtual file system;
and acquiring the corresponding rendering configuration information in the virtual file system according to the second address information, and acquiring the position and the size of the entity according to the rendering configuration information.
Optionally, the apparatus further comprises a display unit, configured to:
and storing the target image-text material to a virtual file system, and creating a display link so as to display the target image-text material according to the display link.
Optionally, the storage information includes index information and length information of the resource file in the virtual file system.
Optionally, the apparatus further includes a first invoking unit, configured to:
in an initial code running environment, calling an initialization canvas function and binding a target graphic library interface so that the rendering canvas calls a network graphic library WebGL in a target bytecode running environment;
the drawing unit is specifically configured to:
and drawing a corresponding picture entity by calling WebGL in the rendering canvas.
Optionally, the apparatus further includes a second invoking unit, configured to:
and calling a rendering entry function in the initial code operating environment so as to draw corresponding picture entities and text entities in the target bytecode operating environment based on the rendering entry function.
Optionally:
the client establishes an entity corresponding to the resource file according to the pre-configured resource file;
the client acquires the target drawing attribute of the entity according to the rendering configuration information of the resource file in the target byte code format;
the client side draws corresponding picture entities and character entities in a rendering canvas according to the target drawing attributes;
and the client synthesizes a target image-text material based on the image entity and the text entity.
An electronic device provided by an embodiment of the present application includes a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to execute any one of the steps of the image-text material synthesizing method.
An embodiment of the present application provides a computer-readable storage medium, which includes a computer program, and when the computer program runs on an electronic device, the computer program is configured to enable the electronic device to execute any one of the steps of the above-mentioned image-text material synthesis method.
An embodiment of the present application provides a computer program product, which includes a computer program, the computer program being stored in a computer-readable storage medium; when the processor of the electronic device reads the computer program from the computer-readable storage medium, the processor executes the computer program, so that the electronic device executes the steps of any one of the above-mentioned image-text material synthesizing methods.
The beneficial effects of this application are as follows:
The embodiments of the application provide a method and apparatus for synthesizing image-text material, an electronic device, and a storage medium. Because resource files can be configured in advance, when image-text material needs to be synthesized, an entity corresponding to the resource file can be created directly from the preconfigured resource file; the target drawing attribute of the entity is then obtained according to the rendering configuration information of the resource file; finally, the corresponding picture entities and text entities are drawn in a rendering canvas according to the resource file and the target drawing attributes, and the target image-text material is automatically synthesized based on them. Synthesizing image-text material in this way, with the picture and text entities drawn from preconfigured resource files and synthesized automatically, requires no manual adjustment and effectively improves material synthesis efficiency.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is an alternative schematic diagram of an application scenario in an embodiment of the present application;
fig. 2 is a flowchart of an implementation of a method for synthesizing image-text materials according to an embodiment of the present application;
fig. 3 is a schematic diagram of a target image-text material according to an embodiment of the application;
fig. 4 is a flowchart of another method for synthesizing image-text materials in the embodiment of the present application;
FIG. 5 is a flow chart illustrating a compilation of a target bytecode according to an embodiment of the present application;
fig. 6 is a schematic view of a read-write flow of a picture resource file in an embodiment of the present application;
FIG. 7 is a diagram illustrating a version relationship between graphic libraries in an embodiment of the present application;
FIG. 8 is a block diagram of a composition rendering engine in an embodiment of the present application;
fig. 9 is a schematic flowchart of a method for synthesizing image-text materials in an embodiment of the present application;
FIG. 10 is a diagram illustrating a relationship between a code and a platform according to an embodiment of the present application;
fig. 11 is a schematic diagram of another target image-text material in an embodiment of the application;
FIG. 12 is a schematic diagram of a base map in an embodiment of the present application;
fig. 13 is a schematic structural diagram of an image-text material synthesizing apparatus in an embodiment of the present application;
fig. 14 is a schematic diagram of a hardware component structure of an electronic device to which an embodiment of the present application is applied;
fig. 15 is a schematic diagram of a hardware component structure of another electronic device to which the embodiment of the present application is applied.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present application. All other embodiments obtained by a person skilled in the art without any inventive step based on the embodiments described in the present application are within the scope of the protection of the present application.
Some concepts related to the embodiments of the present application are described below.
The World Wide Web (Web) is a HyperText Transfer Protocol (HTTP)-based, global, dynamically interactive, cross-platform, distributed graphical information system. The service is built on the Internet and provides browsers with a graphical, easily accessible visual interface for searching and browsing information on the network, integrating graphical, audio, and video information. Web addresses are accessed through a browser.
Browser: an application program used to retrieve, present, and transfer Web information resources identified by uniform resource identifiers. Such a resource may be a web page, a picture, a video, or any other content presented on the Web; through hyperlinks, a user can browse interrelated information in the browser.
Open Graphics Library (OpenGL): a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics, consisting of nearly 350 different function calls for drawing everything from simple graphics primitives to complex three-dimensional scenes. OpenGL is commonly used in CAD, virtual reality, scientific visualization programs, and video game development.
Web Graphics Library (WebGL): a 3D drawing protocol. This drawing technology standard combines JavaScript with OpenGL ES 2.0; by adding a JavaScript binding for OpenGL ES 2.0, WebGL provides hardware-accelerated 3D rendering for the HTML5 Canvas, so that Web developers can display 3D scenes and models more smoothly in the browser by means of the system graphics card, and can create complex navigation and data visualizations. The WebGL standard avoids the trouble of developing web-page-specific rendering plug-ins: it requires no browser plug-in support, and uses the underlying graphics hardware acceleration for rendering through a unified, standard, cross-platform OpenGL interface.
JavaScript (js): a lightweight, interpreted or just-in-time compiled programming language with first-class functions. JavaScript is a high-level scripting language for the Internet that has been widely used in Web application development; it is often used to add dynamic behavior to web pages and provide users with a smoother, more attractive browsing experience. The initial code format in the embodiment of the application may refer to JavaScript, and the initial code running environment is a JS running environment.
WebAssembly (wasm): a low-level, portable bytecode format for client-side scripts in browsers. It is a new type of code that runs in modern web browsers, provides new performance characteristics, and is efficient for browsers to download and load. The target bytecode format in the embodiment of the application may refer to WebAssembly, and the target bytecode running environment is a wasm running environment.
Shader: in the field of computer graphics, a set of instructions used by graphics hardware when executing a rendering task, used to compute the color or brightness of an image.
Rendering configuration information: information according to which an entity's drawing attributes are automatically adjusted during rendering; it can include information such as pictures, fonts, copy, and sizes.
Target drawing attribute: the attributes according to which an entity is drawn; these can include the entity's position, size, width, height, shadow, and the like.
Compiling: the process of using a compiler to generate a target program from a source program written in a source language. A compiler translates a source program into a target program in five stages: lexical analysis; syntax analysis; semantic checking and intermediate code generation; code optimization; and target code generation. The main steps are lexical analysis and syntax analysis, i.e., source program analysis, during which grammatical errors are found and prompt information is produced.
The following briefly introduces the design concept of the embodiments of the present application:
With the development of intelligent terminals and audio/video technologies, objects can browse, create, and share multimedia resources such as pictures and videos through intelligent terminals; during creation, more and more objects enrich the content of pictures or videos by synthesizing image-text materials.
In the related art, image-text material is mainly synthesized in the following two ways:
The first way: the object selects suitable materials from the decoration materials and base-map materials provided by editing software, manually adjusts the position, size, and so on of the selected materials, and finally obtains a synthesized picture or video from the selected materials and their adjustments. However, when image-text materials are combined in the first way, the object generally needs to select and adjust several times to achieve a good effect, which takes a long time.
The second way: synthesizing the image-text material through a system engine on a server. Such an engine, intended to eliminate repetitive work and improve the efficiency of generating a single template picture, performs high-efficiency, high-quality picture synthesis mainly on the server. This consumes the server's computing performance and is inefficient: server-side image synthesis is processed very slowly, and when the volume is large, requests must wait in a queue, resulting in low efficiency.
Therefore, how to improve the efficiency of synthesizing the image-text materials is a problem to be solved urgently at present.
In view of the above, embodiments of the present application provide a method, an apparatus, an electronic device, and a storage medium for synthesizing image-text material. An entity corresponding to a resource file is created according to a preconfigured resource file; a target drawing attribute of the entity is then obtained according to the rendering configuration information of the resource file; finally, corresponding picture entities and text entities are drawn in a rendering canvas according to the resource file and the target drawing attributes, and the target image-text material is automatically synthesized based on those entities. Because the picture and text entities are drawn from a preconfigured resource file and synthesized automatically, no manual adjustment is needed, which effectively improves material synthesis efficiency.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it should be understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
As shown in fig. 1, it is a schematic view of an application scenario of the embodiment of the present application. The application scenario diagram includes two terminal devices 110 and a server 120.
In the embodiment of the present application, the terminal device 110 includes, but is not limited to, devices such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-book reader, an intelligent voice interaction device, a smart home appliance, and a vehicle-mounted terminal. The terminal device may have a client related to image-text material synthesis installed on it, where the client may be software (e.g., a browser or editing software), a web page, an applet, etc.; the server 120 is the background server corresponding to that software, web page, or applet, or a server dedicated to synthesizing image-text material, which is not limited in this application. The server 120 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
It should be noted that the method for synthesizing image-text material in the embodiment of the present application may be executed by an electronic device, which may be the server 120 or the terminal device 110; that is, the method may be executed by the server 120 or the terminal device 110 alone, or by the server 120 and the terminal device 110 together. For example, when they execute it together, the server 120 sends the preconfigured resource file to the terminal device 110; the terminal device 110 creates an entity corresponding to the resource file according to the preconfigured resource file, obtains the target drawing attribute of the entity according to the rendering configuration information of the resource file, draws the corresponding picture entities and text entities in a rendering canvas according to the resource file and the target drawing attributes, and synthesizes the target image-text material based on the picture entities and text entities.
In an alternative embodiment, terminal device 110 and server 120 may communicate via a communication network.
In an alternative embodiment, the communication network is a wired network or a wireless network.
It should be noted that fig. 1 is only an example, and the number of the terminal devices and the servers is not limited in practice, and is not specifically limited in the embodiment of the present application.
In the embodiment of the application, when there are multiple servers, the multiple servers can form a blockchain, with each server being a node on the blockchain; the resource files involved in the image-text material synthesis method can be stored on the blockchain.
In addition, the embodiment of the application can be applied to various scenes, including not only image-text material synthesis scenes, but also scenes such as cloud technology, artificial intelligence, intelligent traffic, driving assistance and the like. For example, when the embodiment of the application is applied to a driving assistance scene, mapping can be performed based on the image-text material synthesis method in the application.
The method for synthesizing image-text material provided by the exemplary embodiments of the application is described below with reference to the drawings in combination with the application scenario described above. It should be noted that the application scenario is shown only for the convenience of understanding the spirit and principles of the application, and the embodiments of the application are not limited in this respect.
Referring to fig. 2, an implementation flowchart of the image-text material synthesis method provided in the embodiment of the present application, taking a client as the execution subject as an example; the specific implementation flow of the method includes the following steps S21-S24:
s21: the client establishes an entity corresponding to the resource file according to the pre-configured resource file;
the resource files comprise decoration resource files to be synthesized and picture resource files, the decoration resource files comprise decoration materials which can be divided into image decoration materials and character decoration materials, and the picture resource files comprise basic materials. If one resource file can contain a plurality of decoration resource files and picture resource files, respectively corresponding entities are created for each decoration resource file and each picture resource file contained in the resource file; or one resource file comprises a decoration resource file or a picture resource file, and an entity corresponding to the resource file is created. The resource file can be pre-configured and stored in the server, and is issued to the client when the object (hereinafter referred to as object) for synthesizing the image-text material is used for synthesizing the image-text material, or the resource file selected by the object is issued to the client after the object is selected from the pre-configured resource file according to the requirement, or the resource file is generated by the image or the character input by the object for the subsequent synthesis of the image-text material. The picture resource file can be configured in advance, and can also be generated according to pictures or videos input by the object.
Fig. 3 is a schematic diagram of a target image-text material in the embodiment of the present application, in which the base map (i.e., the base material), containing an image of a boy, is stored in resource file 1, while image decoration material 1 and text decoration materials 2 and 3 are stored in resource file 2. Object 1 inputs the base map and the selected template configuration information, the client generates resource file 1, and resource file 2 is matched to resource file 1 for synthesizing the image-text material. Adding decoration materials on top of the base material effectively enriches the picture content.
In addition, the application can synthesize pictures with decoration materials, and can also intelligently match a video uploaded by an object to generate a synchronized live-style course video with a commentary document. Specifically, taking the video to be synthesized input by object 2 as a language course video as an example: the client performs speech recognition on the language course video to obtain the copy it contains, generates resource file 5 containing the language course video and the corresponding copy, matches resource file 5 to resource file 6 containing a decoration resource file, and generates a synchronized live-style course video with commentary for the language course video according to the image decoration materials and text decoration materials contained in resource file 6. Synthesizing the video's image-text material in this way, with the picture and text entities drawn from preconfigured resource files and synthesized automatically, requires no manual adjustment, effectively improves material synthesis efficiency, improves the watchability of the course video, and effectively enriches its content.
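To make the data flow of step S21 concrete, the following is a minimal JavaScript sketch of a preconfigured template and the entity-creation pass. The JSON field names and the createEntities helper are illustrative assumptions, not the patent's actual schema.

```javascript
// Hypothetical preconfigured template; all field names are illustrative assumptions.
const templateConfig = {
  resources: [
    { id: "base",  type: "picture", url: "https://example.com/base.png" },
    { id: "deco1", type: "picture", url: "https://example.com/sticker.png" },
    { id: "title", type: "text",    copy: "Hello" }
  ]
};

// Step S21: traverse the resource file entries and create one entity per entry.
function createEntities(config) {
  return config.resources.map((res) => ({
    id: res.id,
    kind: res.type,           // distinguishes picture entities from text entities
    source: res.url ?? null,  // picture entities reference a downloadable file
    copy: res.copy ?? null,   // text entities carry their copy
    attrs: {}                 // target drawing attributes are filled in at step S22
  }));
}

const entities = createEntities(templateConfig);
```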
S22: the client acquires the target drawing attribute of the entity according to the rendering configuration information of the resource file;
Specifically, before drawing an entity, the entity's target drawing attributes need to be determined; these at least include attributes such as width, height, size, shadow, and position. The rendering configuration information of the resource file may include information such as pictures, fonts, copy, and sizes; the entity's drawing attributes are adjusted according to the rendering configuration information of the resource file to obtain the entity's target drawing attributes. The rendering configuration information may be pre-stored in the resource files, with each resource file containing its corresponding rendering configuration information; it may also be generated from a form configured by a background administrator, or input by the object, which is not specifically limited here.
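As a sketch of step S22, the following shows how rendering configuration information might be applied to an entity to produce its target drawing attributes; the field names follow the attributes listed above, but the exact schema is an assumption.

```javascript
// Hypothetical rendering configuration keyed by entity id; the schema is assumed.
const renderingConfig = {
  base:  { x: 0,  y: 0,  width: 720, height: 1280 },
  title: { x: 60, y: 80, fontSize: 48, font: "sans-serif", shadow: true }
};

// Step S22: adjust an entity's drawing attributes from the rendering configuration
// to obtain its target drawing attributes (position, size, shadow, and so on).
function applyRenderingConfig(entity, config) {
  entity.attrs = { ...entity.attrs, ...(config[entity.id] || {}) };
  return entity;
}
```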
S23: the client side draws corresponding picture entities and character entities in the rendering canvas according to the resource files and the target drawing attributes;
Specifically, the client traverses the created entities and draws the picture entities and text entities according to the resource file and the target drawing attributes, where picture entities can be drawn through OpenGL and text entities through Skia (a 2D vector graphics processing library).
S24: and the client synthesizes the target image-text material based on the image entity and the text entity.
In the embodiment of the application, an entity corresponding to the resource file is created according to the preconfigured resource file; the target drawing attribute of the entity is then obtained according to the rendering configuration information of the resource file; finally, the corresponding picture entities and text entities are drawn in a rendering canvas according to the resource file and the target drawing attributes, and the target image-text material is automatically synthesized based on them. Because the picture and text entities are drawn from the preconfigured resource file and synthesized automatically, no manual adjustment is needed, which effectively improves material synthesis efficiency.
In an alternative embodiment, the method further includes, after step S21, compiling the resource file into the target bytecode format, and steps S21-S24 can be implemented as follows. Referring to fig. 4, a flowchart of another image-text material synthesis method in the embodiment of the present application, including the following steps:
s41: creating an entity corresponding to the resource file according to the pre-configured resource file, and compiling the resource file into a target byte code format;
s42: in a target bytecode operation environment, obtaining a target drawing attribute of an entity according to rendering configuration information of a resource file;
s43: drawing corresponding picture entities and character entities in a rendering canvas according to the resource files in the target byte code format and the target drawing attributes;
s44: and synthesizing the target image-text material based on the image entity and the text entity.
The target bytecode format can be WebAssembly (wasm), and the target bytecode running environment is a wasm runtime. Compiling the resource file into the target bytecode format produces a special bytecode between a high-level language and machine code; this bytecode is not directly tied to any platform and can run on different platforms. The front end and the back end share one set of code and logic: when the back end changes, it is compiled into wasm and the front end can use it directly, so no time is spent on front-end/back-end requests. The browser, i.e., the client's computing resources, is used, and both the front end and back end run on the client; since the computation happens on the client side, server cost is reduced and high availability of the service is improved. Moreover, owing to the characteristics of WebAssembly, the packaged wasm code is small, and the OpenGL specification used by the back end supports good Shader coloring effects.
In an alternative embodiment, the execution subject of steps S41-S44 is a client, and the method includes the following steps:
the client establishes an entity corresponding to the resource file according to the pre-configured resource file;
the client acquires the target drawing attribute of the entity according to the rendering configuration information of the resource file in the target byte code format;
the client side draws corresponding picture entities and character entities in the rendering canvas according to the target drawing attributes;
and the client synthesizes the target image-text material based on the image entity and the text entity.
In an alternative embodiment, step S21 may be implemented as:
firstly, in an initial code running environment, analyzing pre-configured template configuration information to obtain first address information of a resource file, and downloading the resource file according to the first address information, wherein the resource file is in an initial code format; and then traversing the resource file to create an entity corresponding to the resource file.
The template configuration information is downloaded from the system's configuration items and stored on the server after configuration; either all stored template configuration information is sent to the client, or only the configuration information of the template selected by the object is sent. Since the template configuration information is generally a C/C++ file, it needs to be parsed. The initial code running environment refers to a JavaScript running environment (i.e., js runtime), and the initial code format refers to C/C++ files, js files, and the like. JavaScript is responsible for pulling the template configuration information, downloading the resource file according to the file address (i.e., the first address information) provided by the template configuration information, and writing the resource file to the virtual file system. The virtual file system supports both the initial code running environment and the target bytecode running environment.
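A minimal sketch of this js-runtime stage, assuming an Emscripten build whose FS module is exposed on the Module object (e.g., via EXPORTED_RUNTIME_METHODS); the URLs, paths, and configuration shape are illustrative.

```javascript
// js runtime: parse the preconfigured template configuration, download each resource
// file by its first address information, and write it into the virtual file system.
async function stageResources(Module, templateConfigUrl) {
  const config = await (await fetch(templateConfigUrl)).json(); // parse template config
  for (const res of config.resources) {
    if (!res.url) continue;                                 // text entries carry no file
    const buf = await (await fetch(res.url)).arrayBuffer(); // download the resource file
    Module.FS.writeFile("/" + res.id + ".png", new Uint8Array(buf)); // write into the VFS
  }
  return config;
}
```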
Referring to fig. 5, a schematic diagram of the compilation flow of the target bytecode in the embodiment of the present application: after the source is compiled into wasm and an auxiliary js file through the Emscripten toolchain, it can be executed in different wasm runtime environments. Emscripten is a toolchain in which C/C++ files are compiled through LLVM. Since a computer can only recognize and execute machine code, the wasm and auxiliary js files must be compiled into machine code by engines such as the v8 wasm engine or wasmer before the computer can recognize and execute them.
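Assuming the engine is built with Emscripten's MODULARIZE option (an assumption about the build configuration), the generated auxiliary js file exports a factory that fetches and instantiates the wasm; a usage sketch:

```javascript
// Usage of the Emscripten-generated auxiliary file (MODULARIZE=1 build assumed):
// engine.js exports a factory that fetches, compiles, and instantiates engine.wasm.
import createEngine from "./engine.js";

const Module = await createEngine(); // resolves once the wasm runtime is ready
// Exported C/C++ entry points (e.g., InitCanvas, Render) are now reachable via ccall.
```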
In an alternative embodiment, after step S41 and before step S42, the following steps 1-2 may be further performed:
step 1: storing the resource file to a virtual file system, and generating storage information of the resource file;
step 2: and generating rendering configuration information of the resource file according to the storage information and the pre-configured rendering information, and storing the pre-configured template configuration information and the pre-configured rendering configuration information to a virtual file system.
Specifically, saving the resource file in the virtual file system yields the written length and the file's index in memory; for example, saving resource file 3 in the virtual file system may yield a written length of 256 characters and an index of 1.32.47. The path of the resource file in the virtual file system is filled into the configuration, and the rendering configuration information is generated in combination with the content filled into the form (i.e., the rendering information), where the form is a background configuration form filled in by an administrator (editor) object for generating rendering configuration information.
In an alternative embodiment, the storage information includes index information and length information of the resource file in the virtual file system.
For example, after the resource file 4 is saved in the virtual file system, the index information of the resource file 4 is obtained as 13.34.251, and the length information is 128 characters, so that the resource file can be read in the virtual file system according to the index information of the resource file, and the reading efficiency is improved.
Referring to fig. 6, a schematic diagram of the read-write flow of a picture resource file in the embodiment of the present application, taking the reading and writing of picture 1 as an example (a runnable sketch of both flows follows the write process below). The read process is:
(1) a C/C++ function calls a file read such as fs.read("1.png") to read picture 1;
(2) Emscripten proxies the call to the virtual file system.
The Emscripten proxy's role is to obtain the file content, i.e., the content of picture 1, from the virtual file system; file content in the virtual file system can be read through the associated index.
The write process is:
(1) fetch("1.png").then(/* convert to an ArrayBuffer and create a DataView */): download picture 1 and create a DataView;
(2) FS.writeFile("/1.png", dataView): write picture 1 into the virtual file system's memory through Emscripten's FS module;
(3) after writing, the written length of picture 1 and its index in memory can be obtained.
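Put together, the two flows above might look as follows on the js side; Module.FS is assumed to be exported, and the file name follows fig. 6:

```javascript
// Write flow: download picture 1 and place it in the virtual file system's memory.
async function writePicture(Module) {
  const buf = await (await fetch("1.png")).arrayBuffer(); // (1) download picture 1
  const bytes = new Uint8Array(buf);                      // view over the downloaded bytes
  Module.FS.writeFile("/1.png", bytes);                   // (2) write via Emscripten's FS module
  return Module.FS.stat("/1.png").size;                   // (3) the written length is queryable
}

// Read flow: the C/C++ side's file read is proxied to the same virtual file system;
// the js side can perform an equivalent read to verify the content.
function readPicture(Module) {
  return Module.FS.readFile("/1.png"); // returns the bytes of picture 1 as a Uint8Array
}
```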
In an alternative embodiment, the target rendering attributes include at least a location and a size of the entity; step S42 may be implemented as:
in a target byte code operating environment, acquiring second address information of a resource file in a virtual file system; and acquiring corresponding rendering configuration information in the virtual file system according to the second address information, and acquiring the position and the size of the entity according to the rendering configuration information.
The second address information of the resource file in the virtual file system may be the storage path of the resource file or the index of the resource file in the virtual file system. The corresponding resource file is read from the virtual file system according to the second address information to obtain the corresponding rendering configuration information, and the entity's drawing attributes are adjusted according to the rendering configuration information to obtain the entity's target drawing attributes, i.e., the position and size of the entity.
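A sketch of resolving the second address information into the entity's position and size, assuming the address is a VFS path and the configuration is stored as JSON (both assumptions):

```javascript
// Resolve the second address information (here assumed to be a VFS path) into the
// entity's position and size; "/config.json" and the field names are illustrative.
function getTargetAttrs(Module, entityId) {
  const raw = Module.FS.readFile("/config.json", { encoding: "utf8" });
  const config = JSON.parse(raw);
  const { x, y, width, height } = config[entityId]; // position and size of the entity
  return { x, y, width, height };
}
```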
In an alternative embodiment, after step S24, the following steps may also be performed:
and storing the target image-text material to a virtual file system, and creating a display link to display the target image-text material according to the display link.
Specifically, after the target image-text material is synthesized, it is saved to the memory of the virtual file system; the target picture (the target image-text material) in memory is then read in the initial code running environment (js runtime), and a display link is created for displaying it. The display link can be a blob link. (In computer vision, a blob refers to a connected region in an image; blob analysis extracts and marks the connected regions of a binarized image after foreground/background separation, where each marked blob represents a foreground target whose relevant features can then be computed, and information about the relevant region can be obtained through blob extraction.)
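A sketch of this display step using standard browser APIs (Blob and URL.createObjectURL); the output path is an assumption:

```javascript
// Read the synthesized target image-text material out of the virtual file system
// and create a display (blob) link for it; "/output.png" is an assumed path.
function makeDisplayLink(Module) {
  const bytes = Module.FS.readFile("/output.png");       // synthesized picture bytes
  const blob = new Blob([bytes], { type: "image/png" }); // wrap them in a browser Blob
  return URL.createObjectURL(blob);                      // blob: URL usable in an <img>
}
```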
In an alternative embodiment, before step S43, the following steps may also be performed:
in the initial code running environment, calling an initialization canvas function and binding a target graphic library interface so that the rendering canvas calls a network graphic library WebGL in the target bytecode running environment;
step S44 may be implemented as:
and in the rendering canvas, drawing a corresponding picture entity by calling WebGL.
Specifically, wasm provides a function InitCanvas (an initialize-canvas function) for js to call, which is responsible for binding the canvas's WebGL context so that the rendering canvas can use WebGL and place content. With WebGL, graphics can be rendered in the browser without downloading any plug-in.
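A sketch of the js-side call, using Emscripten's standard ccall helper; the argument list of the exported InitCanvas is an assumption:

```javascript
// Bind the rendering canvas's WebGL context by calling the wasm-exported InitCanvas.
// ccall(name, returnType, argTypes, args) is Emscripten's standard call helper;
// passing a canvas selector is an assumption about the exported signature.
Module.ccall("InitCanvas", null, ["string"], ["#canvas"]);
```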
In an alternative embodiment, before step S43, the following steps may also be performed:
and calling a rendering entry function in the initial code running environment so as to draw corresponding picture entities and text entities in the target bytecode running environment based on the rendering entry function.
Specifically, wasm provides a render entry function (Render) for the js environment (the initial code running environment) to call, which is responsible for reading the rendering configuration information written by js. js calls the wasm renderer through Emscripten's ccall mechanism, and the corresponding picture entities and text entities are drawn in the wasm running environment based on the renderer.
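Correspondingly, a sketch of invoking the render entry from js; passing the configuration path as an argument is an assumption about the exported signature:

```javascript
// Invoke the wasm render entry; it reads the rendering configuration previously
// written to the virtual file system and draws the picture and text entities.
Module.ccall("Render", null, ["string"], ["/config.json"]);
```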
In the embodiment of the application, using WebAssembly places processing on the same side, which improves compatibility and performance and diversifies the materials that can be drawn and fused; 100% of material formats are supported under the browser rendering mode; the client's computing resources are used instead of a server's, which reduces service cost and improves service availability; and based on WebAssembly packaging, the engine source code is small and performance is improved.
Fig. 7 is a schematic diagram of the version relationship between the graphics libraries in the embodiment of the present application. OpenGL is a software interface that drives 2D and 3D vector graphics hardware through function calls and supports graphics rendering on different platforms; OpenGL ES is a subset of OpenGL and a graphics development standard for embedded systems; WebGL is a technology for drawing and rendering complex three-dimensional graphics on a web page and allowing the user to interact with them. As personal computers and browsers have become more capable, increasingly refined and complex 3D graphics can be created on the Web. Traditionally, to display three-dimensional graphics, developers had to use C or C++ together with specialized computer graphics libraries such as OpenGL or Direct3D to develop stand-alone applications. With WebGL, three-dimensional graphics can now be displayed on a web page by adding some extra three-dimensional-graphics code to the already familiar HTML and JS; WebGL is embedded in the browser and can be used directly, without installing plug-ins or libraries. OpenGL ES is a special subset version of OpenGL developed to meet the requirements of embedded devices; WebGL combines JavaScript with OpenGL ES 2.0 by adding a JavaScript binding for OpenGL ES 2.0, for the purpose of web-page rendering. First a JavaScript program runs and calls the WebGL-related methods; then the drawing area is cleared and the vertex shader and fragment shader draw into the color buffer; finally the content of the color buffer is automatically displayed on the browser's canvas. The OpenGL specification used by the back end supports good Shader coloring effects, diversifies the materials that can be drawn and fused, supports 100% of material formats based on the browser rendering mode, and synthesizes target image-text material with good results.
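The drawing sequence just described (run js, clear the drawing area, let the vertex and fragment shaders draw into the color buffer, present on the canvas) can be illustrated with a minimal self-contained WebGL sketch, unrelated to the patent's own shaders:

```javascript
// Minimal WebGL drawing pass: compile shaders, clear the color buffer, draw a triangle.
const gl = document.querySelector("canvas").getContext("webgl");

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}

const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER,
  "attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }"));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER,
  "precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }"));
gl.linkProgram(program);
gl.useProgram(program);

// Upload one triangle's vertices and wire them to the vertex shader's attribute.
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([0, 0.8, -0.8, -0.8, 0.8, -0.8]), gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, "pos");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.clearColor(0, 0, 0, 1);         // empty (clear) the drawing area
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, 3); // the color buffer is then presented on the canvas
```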
Fig. 8 is a schematic structural diagram of the synthesis rendering engine in the embodiment of the present application, which performs image-text material synthesis and can run in the client's browser. First, the rendering entry is reached and the template configuration information is downloaded from the template configuration center; the downloaded template configuration information may be selected by the object or recommended to the object by the synthesis rendering engine. Then the template configuration information is parsed, the resource files are pulled, and the resource files are traversed to create the corresponding entities. Finally, the drawing process begins: the entities' drawing attributes are adjusted according to the rendering configuration information, the entities (including picture materials, copy, fonts, and so on) are drawn once adjustment is complete, and the drawn entities are synthesized to obtain the target image-text material.
Referring to fig. 9, a detailed schematic flowchart of the image-text material synthesis method in the embodiment of the present application; according to the running environment, the method is divided into the following two parts (a consolidated sketch of both parts follows the two lists below):
js runtime (initial code running environment):
(1) download the template configuration information, parse it, and download the resource files according to the resource file addresses it provides;
(2) write the resource files into the virtual file system's memory through Emscripten's FS module, generate rendering configuration information from the resource files' paths in the virtual file system and the rendering information filled in by the administrator, and write the generated rendering configuration information and the template configuration information into memory;
(3) call the wasm function "InitCanvas" through ccall to bind the canvas WebGL context;
(4) call the wasm render entry function "Render" through ccall;
(5) after the picture is synthesized, read it from memory and create a blob link for display.
wasm runtime (target bytecode running environment):
(1) initialize the rendering canvas (InitCanvas);
(2) create the context attribute information (EmscriptenWebGLContextAttributes);
(3) enter the render entry function "Render";
(4) read the rendering configuration information and template configuration information;
(5) proxy to the virtual file system;
(6) traverse the material entities, drawing picture entities with OpenGL and text entities with Skia;
(7) write out the synthesized picture content.
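Tying the two runtimes together, a consolidated js-side driver for the flow of fig. 9 might look like the following; it reuses the hypothetical helpers from the earlier sketches, and every exported name and path remains an assumption:

```javascript
// End-to-end js-runtime driver; the wasm runtime executes inside the ccall'd entries.
async function synthesize(templateConfigUrl) {
  const Module = await createEngine();                        // instantiate engine.wasm
  await stageResources(Module, templateConfigUrl);            // download + write to the VFS
  Module.ccall("InitCanvas", null, ["string"], ["#canvas"]);  // bind the WebGL context
  Module.ccall("Render", null, ["string"], ["/config.json"]); // draw the entities
  return makeDisplayLink(Module);                             // blob link to the result
}
```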
Referring to fig. 10, a schematic diagram of the relationship between code and platform in the embodiment of the present application: after source code is compiled into wasm by the method in the application, a special bytecode between a high-level language and machine code is generated; this bytecode is not directly tied to any platform and can run on different platforms, for example X64, X86, and ARM in fig. 10.
In the embodiment of the application, based on WebAssembly packaging, the engine source code is small and performance is improved; the front end and the back end share one set of code and logic, so when the back end changes and is compiled into wasm, the front end can use it directly; and because the browser, i.e., the client's computing resources, is used, no time is spent on front-end/back-end requests, which reduces server cost.
Referring to fig. 11, a schematic diagram of another target image-text material in the embodiment of the application; an image-text material synthesis process according to the application is described below with reference to fig. 11. First, object Xiaohua inputs the base map shown in fig. 12; the synthesis rendering engine generates resource file 4 from the input base map and rendering information, downloads template configuration information from the template configuration center, and downloads resource file 5 according to the template configuration information, where resource file 5 includes text decoration material 1 and text decoration material 2. Then picture entity 1, text entity 1, and text entity 2 are created; their drawing attributes are adjusted according to the rendering configuration information of the resource files; after adjustment, the entities are drawn in the rendering canvas according to the resulting target drawing attributes; and the drawn entities are synthesized to obtain the target image-text material shown in fig. 11.
In the embodiment of the application, preset pictures and preset copy are configured through WebAssembly to generate pictures rapidly, or videos and commentary documents are uploaded and intelligently matched to generate synchronized live-style course videos with commentary. Implementing the front end and back end on the Web with WebAssembly can greatly increase synthesis speed; the final materials are synthesized on a single terminal, compatibility is high, and the synthesis error rate is extremely low. Using WebAssembly places processing on the same side, which improves compatibility and performance and diversifies the materials that can be drawn and fused; 100% of material formats are supported under the browser rendering mode; the client's computing resources are used instead of a server's, which reduces service cost and improves service availability; and based on WebAssembly packaging, the engine source code is small and performance is improved.
In addition, in the embodiment of the application, an entity corresponding to the resource file is created according to the preconfigured resource file; the target drawing attribute of the entity is then obtained according to the rendering configuration information of the resource file; finally, the corresponding picture entities and text entities are drawn in the rendering canvas according to the resource file and the target drawing attributes, and the target image-text material is automatically synthesized based on them. Because the picture and text entities are drawn from the preconfigured resource file and synthesized automatically, no manual adjustment is needed, which effectively improves material synthesis efficiency.
Based on the same inventive concept, the embodiment of the application also provides an image-text material synthesizing device. As shown in fig. 13, it is a schematic structural diagram of a graphics context material composition apparatus 1300, which may include:
a creating unit 1301, configured to create an entity corresponding to a resource file according to a preconfigured resource file, where the resource file includes a decoration resource file and a picture resource file to be synthesized;
an obtaining unit 1302, configured to obtain a target drawing attribute of an entity according to rendering configuration information of a resource file;
the drawing unit 1303 is configured to draw corresponding picture entities and text entities in the rendering canvas according to the resource file and the target drawing attribute;
and a synthesizing unit 1304, configured to synthesize the target image-text material based on the image entity and the text entity.
Optionally, the image-text material composition apparatus further includes a compiling unit 1305, configured to:
compiling the resource file into a target bytecode format;
the obtaining unit 1302 is specifically configured to:
in a target bytecode running environment, obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file;
the drawing unit 1303 is specifically configured to:
and drawing corresponding picture entities and text entities in the rendering canvas according to the resource files in the target bytecode format and the target drawing attributes.
Optionally, the creating unit 1301 is specifically configured to:
in an initial code running environment, parsing preconfigured template configuration information to obtain first address information of the resource file, and downloading the resource file according to the first address information, where the resource file is in an initial code format;
and traversing the resource file to create an entity corresponding to the resource file.
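A minimal sketch of this step is given below, assuming the template configuration is a JSON document listing resource URLs; the TemplateConfig shape and field names are hypothetical.

```typescript
// Sketch of parsing preconfigured template configuration information and
// downloading resource files by their address information. The JSON layout
// and field names are hypothetical.
interface TemplateConfig { resources: { id: string; url: string }[]; }

async function downloadResources(templateUrl: string): Promise<Map<string, ArrayBuffer>> {
  // Parse the template configuration to obtain the first address
  // information of each resource file.
  const config: TemplateConfig = await (await fetch(templateUrl)).json();

  // Download every resource file according to its address information.
  const files = new Map<string, ArrayBuffer>();
  for (const res of config.resources) {
    files.set(res.id, await (await fetch(res.url)).arrayBuffer());
  }
  return files;
}
```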
Optionally, the apparatus further includes a saving unit 1306, configured to:
storing the resource file to a virtual file system, and generating storage information of the resource file;
and generating rendering configuration information of the resource file according to the storage information and the pre-configured rendering information, and storing the pre-configured template configuration information and the pre-configured rendering configuration information to the virtual file system.
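One way to realize this step, sketched below, is an Emscripten-style in-memory virtual file system exposed as an FS object by the compiled module; the directory layout and the StorageInfo shape are assumptions for illustration, not necessarily the virtual file system of this application.

```typescript
// Sketch assuming an Emscripten-style virtual file system exposed as an FS
// object by the compiled module; the /resources and /config directories are
// assumed to have been created at startup (e.g. with FS.mkdir). The
// StorageInfo shape is an illustrative assumption.
declare const FS: { writeFile(path: string, data: Uint8Array | string): void };

interface StorageInfo { path: string; length: number; }

function saveToVirtualFS(name: string, bytes: ArrayBuffer): StorageInfo {
  const data = new Uint8Array(bytes);
  const path = `/resources/${name}`;
  FS.writeFile(path, data);
  // Storage information: where the resource lives in the virtual file
  // system and how long it is.
  return { path, length: data.byteLength };
}

// The rendering configuration combines the storage information with the
// preconfigured rendering information and is itself saved to the file system.
function saveRenderConfig(info: StorageInfo, renderingInfo: object): void {
  FS.writeFile("/config/render.json", JSON.stringify({ ...info, ...renderingInfo }));
}
```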
Optionally, the target drawing attribute at least includes a position and a size of the entity; the obtaining unit 1302 is specifically configured to:
in a target bytecode running environment, acquiring second address information of the resource file in the virtual file system;
and acquiring corresponding rendering configuration information in the virtual file system according to the second address information, and acquiring the position and the size of the entity according to the rendering configuration information.
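The corresponding read path can be sketched as follows, again assuming an Emscripten-style FS; modeling the "second address information" simply as the configuration file path is an assumption for illustration.

```typescript
// Corresponding read path, again assuming an Emscripten-style FS; the
// "second address information" is modeled here as the configuration path.
declare const FS: { readFile(path: string, opts: { encoding: "utf8" }): string };

interface TargetDrawAttrs { x: number; y: number; width: number; height: number; }

function getTargetDrawAttrs(configPath: string): TargetDrawAttrs {
  const cfg = JSON.parse(FS.readFile(configPath, { encoding: "utf8" }));
  // The position and size of the entity come from the rendering configuration.
  return { x: cfg.x, y: cfg.y, width: cfg.width, height: cfg.height };
}
```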
Optionally, the apparatus further comprises a presentation unit 1307, configured to:
and storing the target image-text material to a virtual file system, and creating a display link so as to display the target image-text material according to the display link.
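In a browser, a display link of this kind can be produced with a blob URL, as sketched below; the output path is hypothetical.

```typescript
// Sketch: read the synthesized material out of the virtual file system and
// expose it through a display link (a blob URL). Assumes an Emscripten-style
// FS; the output path is an illustrative assumption.
declare const FS: { readFile(path: string): Uint8Array };

function createDisplayLink(outputPath: string): string {
  const bytes = FS.readFile(outputPath);           // binary read by default
  const blob = new Blob([bytes], { type: "image/png" });
  // The object URL serves as the display link for showing the material.
  return URL.createObjectURL(blob);
}

// Usage: point an <img> element at the link, e.g.
// document.querySelector("img")!.src = createDisplayLink("/out/material.png");
```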
Optionally, the storage information includes index information and length information of the resource file in the virtual file system.
Optionally, the apparatus further includes a first calling unit 1308, configured to:
in the initial code running environment, calling an initialization canvas function and binding a target graphics library interface, so that the rendering canvas calls the Web graphics library WebGL in the target bytecode running environment;
the drawing unit is specifically configured to:
and in the rendering canvas, drawing a corresponding picture entity by calling WebGL.
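The browser-side WebGL calls involved can be sketched as follows; the full shader and draw-call setup is omitted for brevity, so this shows only acquiring a WebGL context on the rendering canvas and uploading a picture entity as a texture.

```typescript
// Minimal sketch of the WebGL side: obtain a WebGL context on the rendering
// canvas and upload a picture entity as a texture. Shader and draw-call
// setup is omitted, so this shows the binding and upload only.
function uploadPictureEntity(canvas: HTMLCanvasElement, image: HTMLImageElement): WebGLTexture {
  const gl = canvas.getContext("webgl");
  if (!gl) throw new Error("WebGL is not available");

  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Upload the decoded image as the texture's pixel data.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  // Non-power-of-two images require clamped wrapping and linear filtering.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return texture;
}
```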
Optionally, the apparatus further includes a second calling unit 1309, configured to:
and calling a rendering entry function in the initial code running environment, so as to draw corresponding picture entities and text entities in the target bytecode running environment based on the rendering entry function.
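If the engine is compiled with Emscripten, the rendering entry function can be called from the initial (JavaScript) code running environment roughly as follows; the exported name "render_entry" and its signature are assumptions.

```typescript
// Sketch of the host-side call: in the initial (JavaScript) code running
// environment, invoke the rendering entry function exported by the
// bytecode-compiled engine. Assumes an Emscripten-compiled module; the
// exported name "render_entry" and its signature are assumptions.
interface EngineModule {
  ccall(name: string, returnType: string | null, argTypes: string[], args: unknown[]): unknown;
}
declare const Module: EngineModule;

function invokeRenderEntry(configPath: string): void {
  // The compiled engine reads the rendering configuration from the virtual
  // file system and draws the picture and text entities on the canvas.
  Module.ccall("render_entry", null, ["string"], [configPath]);
}
```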
Optionally, the above steps may be performed by a client:
the client establishes an entity corresponding to the resource file according to the pre-configured resource file;
the client acquires the target drawing attribute of the entity according to the rendering configuration information of the resource file in the target bytecode format;
the client draws corresponding picture entities and text entities in the rendering canvas according to the target drawing attributes;
and the client synthesizes the target image-text material based on the image entity and the text entity.
In the embodiment of the present application, an entity corresponding to the resource file is created according to the preconfigured resource file, the target drawing attribute of the entity is then obtained according to the rendering configuration information of the resource file, and finally the corresponding picture entity and text entity are drawn in the rendering canvas according to the resource file and the target drawing attribute, and the target image-text material is automatically synthesized based on the picture entity and the text entity. With image-text material synthesis performed in this manner, the picture entity and the text entity are drawn from the preconfigured resource file and synthesized automatically, so no manual adjustment is needed and material synthesis efficiency is effectively improved.
For convenience of description, the above parts are described separately as modules (or units) divided by function. Of course, when the present application is implemented, the functions of the various modules (or units) may be implemented in one or more pieces of software or hardware.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, a method or a program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module" or "system."
Based on the same inventive concept as the above method embodiments, an embodiment of the present application further provides an electronic device. In one embodiment, the electronic device may be a server, such as the server 120 shown in fig. 1. In this embodiment, the structure of the electronic device may be as shown in fig. 14, including a memory 1401, a communication module 1403 and one or more processors 1402.
The memory 1401 is used for storing computer programs executed by the processor 1402. The memory 1401 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, programs required for running instant messaging functions, and the like; the data storage area may store various kinds of instant messaging information, operation instruction sets, and the like.
The memory 1401 may be a volatile memory, such as a random-access memory (RAM); the memory 1401 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 1401 may be any other medium that can be used to carry or store a desired computer program in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1401 may also be a combination of the above memories.
The processor 1402 may include one or more Central Processing Units (CPUs), or be a digital processing unit, etc. The processor 1402 is used for implementing the above-mentioned graphics and text material composition method when calling the computer program stored in the memory 1401.
The communication module 1403 is used for communicating with the terminal device and other servers.
The embodiment of the present application does not limit the specific connection medium among the memory 1401, the communication module 1403 and the processor 1402. In fig. 14, the memory 1401 and the processor 1402 are connected by a bus 1404, which is depicted by a thick line; the connection manner between the other components is merely illustrative and not limiting. The bus 1404 may be divided into an address bus, a data bus, a control bus, and the like. For ease of description, only one thick line is depicted in fig. 14, but this does not mean that there is only one bus or one type of bus.
The memory 1401, as a computer storage medium, stores computer-executable instructions for implementing the image-text material synthesis method of the embodiments of the present application. The processor 1402 is configured to perform the image-text material synthesis method described above, as shown in fig. 2.
In another embodiment, the electronic device may also be other electronic devices, such as the terminal device 110 shown in fig. 1. In this embodiment, the structure of the electronic device may be as shown in fig. 15, including: communications component 1510, memory 1520, display unit 1530, camera 1540, sensors 1550, audio circuitry 1560, bluetooth module 1570, processor 1580, and the like.
The communication component 1510 is used to communicate with a server. In some embodiments, a Wireless Fidelity (WiFi) module may be included; WiFi is a short-range wireless transmission technology, and through the WiFi module the electronic device can help the user send and receive information.
The memory 1520 may be used to store software programs and data. The processor 1580 performs various functions of the terminal device 110 and processes data by running the software programs or data stored in the memory 1520. The memory 1520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 1520 stores an operating system that enables the terminal device 110 to operate, and may also store various application programs as well as a computer program for executing the image-text material synthesis method of the embodiments of the present application.
The display unit 1530 may be used to display information input by the user or information provided to the user, as well as a graphical user interface (GUI) of the various menus of the terminal device 110. Specifically, the display unit 1530 may include a display screen 1532 disposed on the front surface of the terminal device 110. The display screen 1532 may be configured in the form of a liquid crystal display, a light-emitting diode, or the like. The display unit 1530 may be used to display the image-text material composition user interface and the like in the embodiments of the present application.
The display unit 1530 may also be used to receive input numeric or character information, generate signal inputs related to user settings and function control of the terminal device 110, and particularly, the display unit 1530 may include a touch screen 1531 disposed on the front surface of the terminal device 110, and may collect touch operations of the user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The touch screen 1531 may cover the display screen 1532, or the touch screen 1531 and the display screen 1532 may be integrated to implement the input and output functions of the terminal device 110, and after the integration, the touch screen may be referred to as a touch display screen for short. The display unit 1530 in this application may display the application programs and the corresponding operation steps.
The camera 1540 may be used to capture still images, and the user may post comments on the images captured by the camera 1540 through the application. There may be one or more cameras 1540. The subject generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the processor 1580 to be converted into a digital image signal.
The terminal device may further comprise at least one sensor 1550, such as an acceleration sensor 1551, a distance sensor 1552, a fingerprint sensor 1553, a temperature sensor 1554. The terminal device may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
Audio circuit 1560, speaker 1561, microphone 1562 may provide an audio interface between a user and terminal device 110. The audio circuit 1560 may transmit the electrical signal converted from the received audio data to the speaker 1561, and convert the electrical signal into an audio signal by the speaker 1561 and output the audio signal. Terminal device 110 may also be configured with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 1562 converts the collected sound signals into electrical signals, which are received by the audio circuit 1560 and converted into audio data, which is output to the communication module 1510 for transmission to, for example, another terminal device 110, or the memory 1520 for further processing.
The bluetooth module 1570 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal device may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) that is also equipped with a bluetooth module via the bluetooth module 1570, so as to perform data interaction.
The processor 1580 is a control center of the terminal device, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs stored in the memory 1520 and calling data stored in the memory 1520. In some embodiments, the processor 1580 may include one or more processing units; the processor 1580 may also integrate an application processor, which primarily handles operating systems, user interfaces, application programs, and the like, and a baseband processor, which primarily handles wireless communications. It is to be appreciated that the baseband processor may not be integrated into the processor 1580. The processor 1580 in the present application may run an operating system, an application program, a user interface display, a touch response, and the method for synthesizing the image-text material according to the embodiment of the present application. Further, the processor 1580 is coupled with the display unit 1530.
In some possible embodiments, aspects of the image-text material synthesis method provided by the present application may also be implemented in the form of a program product, which includes a computer program. When the program product runs on an electronic device, the computer program causes the electronic device to perform the steps of the image-text material synthesis method according to the various exemplary embodiments of the present application described above in this specification; for example, the electronic device may perform the steps shown in fig. 2 or fig. 4.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include a computer program, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with a command execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
The computer program embodied on the readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer programs for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer program may execute entirely on the user's electronic device, partly on the user's electronic device, as a stand-alone software package, partly on the user's electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of a remote electronic device, the remote electronic device may be connected to the user's electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided into a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (15)

1. A method for synthesizing image-text materials is characterized by comprising the following steps:
creating an entity corresponding to the resource file according to a pre-configured resource file, wherein the resource file comprises a decoration resource file and a picture resource file to be synthesized;
obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file;
drawing corresponding picture entities and text entities in a rendering canvas according to the resource files and the target drawing attributes;
and synthesizing a target image-text material based on the image entity and the text entity.
2. The method of claim 1, wherein after the creating an entity corresponding to the resource file according to the preconfigured resource file, the method further comprises:
compiling the resource file into a target bytecode format;
the obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file includes:
in a target bytecode running environment, obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file;
the drawing corresponding picture entities and text entities in a rendering canvas according to the resource files and the target drawing attributes includes:
and drawing corresponding picture entities and text entities in a rendering canvas according to the resource files in the target bytecode format and the target drawing attributes.
3. The method of claim 1, wherein the creating an entity corresponding to the resource file from the preconfigured resource file comprises:
in an initial code running environment, parsing preconfigured template configuration information to obtain first address information of the resource file, and downloading the resource file according to the first address information, wherein the resource file is in an initial code format;
and traversing the resource file and creating an entity corresponding to the resource file.
4. The method of claim 2, wherein after the compiling the resource file into the target bytecode format and before the obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file, the method further comprises:
saving the resource file to a virtual file system, and generating storage information of the resource file;
and generating rendering configuration information of the resource file according to the storage information and pre-configured rendering information, and storing pre-configured template configuration information and the rendering configuration information to the virtual file system.
5. The method of claim 4, wherein the target drawing attributes include at least a position and a size of the entity; and the obtaining, in the target bytecode running environment, the target drawing attribute of the entity according to the rendering configuration information of the resource file includes:
in the target bytecode running environment, acquiring second address information of the resource file in the virtual file system;
and acquiring the corresponding rendering configuration information in the virtual file system according to the second address information, and acquiring the position and the size of the entity according to the rendering configuration information.
6. The method of claim 1, wherein after the synthesizing the target image-text material based on the image entity and the text entity, the method further comprises:
and storing the target image-text material to a virtual file system, and creating a display link so as to display the target image-text material according to the display link.
7. The method of claim 4, wherein the storage information comprises index information and length information of the resource file in the virtual file system.
8. The method of any one of claims 1 to 7, wherein before the drawing the corresponding picture entity and text entity in the rendering canvas according to the resource file in the target bytecode format and the target drawing attribute, the method further comprises:
in an initial code running environment, calling an initialization canvas function and binding a target graphics library interface, so that the rendering canvas calls the Web graphics library WebGL in a target bytecode running environment;
the drawing the corresponding picture entity in the rendering canvas comprises:
and drawing a corresponding picture entity by calling WebGL in the rendering canvas.
9. The method according to any one of claims 1 to 7, wherein before the drawing the corresponding picture entity and text entity in the rendering canvas according to the resource file in the target bytecode format and the target drawing attribute, the method further comprises:
and calling a rendering entry function in an initial code running environment, so as to draw corresponding picture entities and text entities in a target bytecode running environment based on the rendering entry function.
10. The method of any one of claims 1 to 7, wherein:
the client creates an entity corresponding to the resource file according to the pre-configured resource file;
the client acquires the target drawing attribute of the entity according to the rendering configuration information of the resource file in the target bytecode format;
the client draws corresponding picture entities and text entities in a rendering canvas according to the target drawing attributes;
and the client synthesizes the target image-text material based on the image entity and the text entity.
11. An image-text material composition apparatus, comprising:
the system comprises a creating unit, a processing unit and a processing unit, wherein the creating unit is used for creating an entity corresponding to a resource file according to a pre-configured resource file, and the resource file comprises a decoration resource file and a picture resource file to be synthesized;
the acquisition unit is used for acquiring the target drawing attribute of the entity according to the rendering configuration information of the resource file;
the drawing unit is used for drawing corresponding picture entities and text entities in a rendering canvas according to the resource files and the target drawing attributes;
and the synthesis unit is used for synthesizing the target image-text material based on the image entity and the text entity.
12. The apparatus of claim 11, wherein the apparatus further comprises a compiling unit, configured to:
compiling the resource file into a target bytecode format;
the obtaining unit is specifically configured to:
in a target bytecode running environment, obtaining the target drawing attribute of the entity according to the rendering configuration information of the resource file;
the drawing unit is specifically configured to:
and drawing corresponding picture entities and text entities in a rendering canvas according to the resource files in the target bytecode format and the target drawing attributes.
13. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to carry out the steps of the method of any of claims 1 to 10.
14. A computer-readable storage medium, characterized in that it comprises a computer program for causing an electronic device to carry out the steps of the method according to any one of claims 1 to 10, when said computer program is run on said electronic device.
15. A computer program product, comprising a computer program stored in a computer readable storage medium; when a processor of an electronic device reads the computer program from the computer-readable storage medium, the processor executes the computer program, causing the electronic device to perform the steps of the method of any of claims 1-10.