CN115393472B - Canvas processing method, canvas processing apparatus, electronic device, readable medium and program product


Info

Publication number
CN115393472B
Authority
CN
China
Prior art keywords
information
canvas
area
text
region
Prior art date
Legal status
Active
Application number
CN202211064403.3A
Other languages
Chinese (zh)
Other versions
CN115393472A (en)
Inventor
陈逸帆
刘超
张超
陈飞
车文彬
Current Assignee
Nanjing Shurui Data Technology Co ltd
Original Assignee
Nanjing Shurui Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Shurui Data Technology Co ltd filed Critical Nanjing Shurui Data Technology Co ltd
Priority to CN202211064403.3A priority Critical patent/CN115393472B/en
Publication of CN115393472A publication Critical patent/CN115393472A/en
Application granted granted Critical
Publication of CN115393472B publication Critical patent/CN115393472B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4023 Scaling of whole images or parts thereof, e.g. expanding or contracting, based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure disclose canvas processing methods, apparatuses, electronic devices, readable media, and program products. One embodiment of the method comprises: performing partition recognition on a canvas to be identified; performing content recognition on the area corresponding to each piece of sub-canvas area information in the sub-canvas area information set; determining an information filling area according to display interface size information and the area ratio information and area position information included in the sub-canvas area information corresponding to the area content information; filling the information filling area with a background color according to the region background color information included in the area content information, to obtain a filled information filling area; performing text scaling on the content text included in the area content information to generate scaled text; adding the scaled text to the filled information filling area; and combining the obtained filled sub-canvas set to obtain a combined canvas. This embodiment achieves effective reading of the information within the canvas.

Description

Canvas processing method, canvas processing apparatus, electronic device, readable medium and program product
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to canvas processing methods, apparatuses, electronic devices, readable media, and program products.
Background
Canvas processing refers to a technique for processing a canvas so that the canvas can accommodate devices of different display sizes. Currently, the following method is generally adopted when processing a canvas: the canvas is scaled directly so that it can be displayed on devices of different display sizes.
However, this approach often suffers from the following technical problems:
first, directly scaling the canvas can blur the information (e.g., text information) within the canvas, thereby affecting effective reading of that information;
second, a typical canvas often comprises a plurality of partitions, and the style settings and content of the information in different partitions often differ, so a conventional recognition approach cannot effectively identify the content in the canvas.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose canvas processing methods, apparatuses, electronic devices, readable media, and program products to address one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a canvas processing method, the method comprising: acquiring canvas to be identified; carrying out partition recognition on the canvas to be recognized to obtain a sub-canvas area information set, wherein the sub-canvas area information in the sub-canvas area information set comprises: area duty information and area position information; performing content recognition on an area corresponding to each piece of canvas area information in the piece of canvas area information set to generate area content information, and obtaining an area content information set, wherein the area content information in the area content information set comprises: content text, text color information, text ground color information and region ground color information; for each region content information in the above region content information set, the following processing steps are performed: determining an information filling area according to area ratio information and area position information included in the sub-canvas area information corresponding to the display interface size information and the area content information, wherein the display interface size information characterizes the display interface size of the information display device; according to the region background color information included in the region content information, background color filling is carried out on the information filling region, and a filled information filling region is obtained; text scaling is carried out on the content text included in the regional content information so as to generate a scaled text; adding the scaled text to the filled information filling area to obtain a filled child canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text ground color of the scaled text is the same as the ground color corresponding to the text ground color information included in the area content information; and combining the obtained filled sub-canvas sets to obtain a combined canvas, and displaying the combined canvas on the information display device.
In a second aspect, some embodiments of the present disclosure provide a canvas processing apparatus, the apparatus comprising: an acquisition unit configured to acquire a canvas to be identified; the partition identification unit is configured to perform partition identification on the canvas to be identified to obtain a sub-canvas area information set, wherein the sub-canvas area information in the sub-canvas area information set comprises: area duty information and area position information; the content recognition unit is configured to perform content recognition on the region corresponding to each piece of canvas region information in the piece of canvas region information set to generate region content information, and obtain a piece of region content information set, wherein the region content information in the piece of region content information set comprises: content text, text color information, text ground color information and region ground color information; a processing unit configured to perform the following processing steps for each region content information in the region content information set described above: determining an information filling area according to area ratio information and area position information included in the sub-canvas area information corresponding to the display interface size information and the area content information, wherein the display interface size information characterizes the display interface size of the information display device; according to the region background color information included in the region content information, background color filling is carried out on the information filling region, and a filled information filling region is obtained; text scaling is carried out on the content text included in the regional content information so as to generate a scaled text; adding the scaled text to the filled information filling area to obtain a filled child canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text ground color of the scaled text is the same as the ground color corresponding to the text ground color information included in the area content information; and the canvas combining and displaying unit is configured to combine the obtained filled sub-canvas sets to obtain a combined canvas, and display the combined canvas on the information display device.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors causes the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
In a fifth aspect, some embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: by the canvas processing method of some embodiments of the present disclosure, effective reading of information within a canvas is achieved. Specifically, the reason why the information in the canvas cannot be effectively read is that: to ensure that the canvas is displayed on devices of different display sizes, the canvas is typically scaled directly. However, scaling the canvas directly can cause information (e.g., text information) within the canvas to be blurred, thereby affecting effective reading of the information within the canvas. Based on this, the canvas processing method of some embodiments of the present disclosure first acquires a canvas to be identified. Then, partition recognition is performed on the canvas to be identified to obtain a sub-canvas area information set, wherein the sub-canvas area information in the sub-canvas area information set comprises: area ratio information and area position information. Further, content recognition is performed on the area corresponding to each piece of sub-canvas area information in the sub-canvas area information set to generate area content information, yielding an area content information set, wherein the area content information in the area content information set comprises: content text, text color information, text ground color information, and region ground color information. In practice, a canvas often contains a plurality of partitions, and the information contained in different partitions differs; by using the partitions as recognition units, the information confusion caused by recognizing the canvas as a whole can be effectively avoided. Further, for each piece of area content information in the above area content information set, the following processing steps are performed: determining an information filling area according to the display interface size information and the area ratio information and area position information included in the sub-canvas area information corresponding to the area content information, wherein the display interface size information characterizes the display interface size of the information display device; filling the information filling area with a background color according to the region background color information included in the area content information, to obtain a filled information filling area; performing text scaling on the content text included in the area content information to generate scaled text; and adding the scaled text to the filled information filling area to obtain a filled sub-canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text ground color of the scaled text is the same as the ground color corresponding to the text ground color information included in the area content information. In practice, the display sizes of different display devices tend to differ, and thus the present disclosure adaptively scales and fills the information within each partition. This avoids the blurring of information (e.g., text information) within the canvas caused by directly scaling the canvas itself. Finally, the obtained filled sub-canvas set is combined to obtain a combined canvas, and the combined canvas is displayed on the information display device.
By the method, the information in the canvas can be effectively read.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of canvas processing methods according to the present disclosure;
FIG. 2 is a schematic diagram of a canvas to be identified;
FIG. 3 is a schematic diagram of the structure of some embodiments of canvas processing apparatuses according to the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a," "an," and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to FIG. 1, a flow 100 is shown in accordance with some embodiments of canvas processing methods of the present disclosure. The canvas processing method comprises the following steps:
Step 101, obtaining canvas to be identified.
In some embodiments, the execution body of the canvas processing method (e.g., a computing device) may obtain the canvas to be identified by way of a wired connection, or a wireless connection. Wherein, the canvas to be identified can be a business canvas to be identified.
As an example, the canvas to be identified may be as shown in fig. 2, where the canvas to be identified in fig. 2 may include 9 partitions. The 9 partitions differ from one another in location and in the information they contain.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other now known or later developed wireless connection means.
The computing device may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or as a single server or a single terminal device. When the computing device is software, it may be installed in the hardware devices listed above and may be implemented as a plurality of pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module, which is not specifically limited herein. It should be appreciated that there may be any number of computing devices, as required by the implementation.
And 102, carrying out partition identification on the canvas to be identified to obtain a sub-canvas area information set.
In some embodiments, the execution body may perform partition recognition on the canvas to be identified to obtain the sub-canvas area information set. The sub-canvas area information in the sub-canvas area information set comprises: area ratio information and area position information. The sub-canvas area information characterizes a partition in the canvas to be identified. The area ratio information characterizes the proportion of the area corresponding to the sub-canvas area information relative to the canvas to be identified. The area position information characterizes the position, within the canvas to be identified, of the partition corresponding to the sub-canvas area information.
As an example, for the canvas to be identified shown in fig. 2, which includes 9 partitions, the number of pieces of sub-canvas area information in the generated sub-canvas area information set is 9.
As yet another example, the executing body may determine the boundaries of the partitions in the canvas to be identified through an edge detection algorithm based on a Sobel operator, so as to implement partition recognition of the canvas to be identified, and further generate the sub-canvas area information set. In practice, a piece of sub-canvas area information may be:
{
    area ratio information: {
        length ratio: 1/5;
        width ratio: 2/3;
    },
    area position information: {
        corner coordinates: [(0, 0), (1, 2)];
        center point coordinates: (1/2, 1);
    }
}
The length ratio represents the proportion of the long side of the area corresponding to the sub-canvas area information to the long side of the canvas to be identified. The width ratio represents the proportion of the wide side of the area corresponding to the sub-canvas area information to the wide side of the canvas to be identified. The corner coordinates may include the coordinates of the upper-left corner and of the lower-right corner of the area corresponding to the sub-canvas area information; for example, the coordinates of the upper-left corner may be (0, 0) and the coordinates of the lower-right corner may be (1, 2). The center point coordinates represent the coordinates of the center point of the area corresponding to the sub-canvas area information; for example, the coordinates of the center point may be (1/2, 1).
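As a non-limiting illustration of the Sobel-based variant above, the following Python sketch shows one way partition boundaries might be located and converted into the area ratio information and area position information fields shown above. The function name detect_partitions, the field names, and the thresholds are assumptions made for illustration and are not taken from the patent; the sketch also assumes the partitions are separated by straight horizontal and vertical boundary lines.

import numpy as np
from scipy import ndimage

def detect_partitions(gray: np.ndarray, edge_ratio: float = 0.6):
    """Sketch: split a grayscale canvas into rectangular partitions along
    strong Sobel edges and return sub-canvas area information dicts."""
    h, w = gray.shape
    # Sobel gradients along each axis; the magnitude highlights boundary lines.
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)

    # A row/column is treated as a boundary if most of its pixels are edge pixels.
    row_is_edge = (np.abs(gy) > 32).mean(axis=1) > edge_ratio
    col_is_edge = (np.abs(gx) > 32).mean(axis=0) > edge_ratio

    def spans(is_edge):
        # Contiguous non-edge runs become partition extents along one axis.
        idx = np.flatnonzero(~is_edge)
        breaks = np.flatnonzero(np.diff(idx) > 1)
        starts = np.r_[idx[0], idx[breaks + 1]]
        ends = np.r_[idx[breaks], idx[-1]]
        return list(zip(starts, ends))

    infos = []
    for top, bottom in spans(row_is_edge):
        for left, right in spans(col_is_edge):
            infos.append({
                "area_ratio": {"length_ratio": (right - left + 1) / w,
                               "width_ratio": (bottom - top + 1) / h},
                "area_position": {
                    "corner_coords": [(left, top), (right, bottom)],
                    "center_point": ((left + right) / 2, (top + bottom) / 2)},
            })
    return infos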
In some optional implementations of some embodiments, the executing body performs partition recognition on the canvas to be recognized to obtain a child canvas area information set, and may include the following steps:
and firstly, performing color block recognition on the canvas to be recognized through a color block recognition model included in a pre-trained partition recognition model so as to generate color block feature vectors, and obtaining a color block feature vector sequence.
The color block recognition model may be a model for recognizing color blocks in the canvas to be identified.
As an example, the color block recognition model described above may be a cascaded neural network model comprising a plurality of convolutional layers. The color block recognition model is a feature pyramid model; that is, the convolution kernels of the plurality of convolutional layers included in the color block recognition model decrease in size in order.
And secondly, the execution main body can conduct boundary recognition on the canvas to be recognized through a boundary recognition model included in the partition recognition model so as to generate boundary feature vectors, and a boundary feature vector sequence is obtained.
The boundary recognition model may be a model for recognizing a boundary in the canvas.
As an example, the above-described boundary recognition model may be an ImageNet model.
And thirdly, inputting the color block feature vector sequence and the boundary feature vector sequence into a partition positioning model included in the partition recognition model to generate the sub-canvas area information set.
As an example, the partition positioning model described above may be composed of a plurality of serially connected convolutional neural network models.
The color block recognition model and the boundary recognition model can be trained in parallel using the same training samples but with different sample labels. In practical situations, a large-scale neural network model has a very large number of model parameters; with a sequential training approach the training period is long, and it grows further whenever the model needs to be fine-tuned. Based on this, the present disclosure draws on the idea of modularization and divides the partition recognition model into modules by function, namely into a color block recognition model, a boundary recognition model, and a partition positioning model. By training the small models in parallel and combining the trained models, the training speed of the model is greatly improved while the model accuracy is maintained.
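A minimal sketch of this modular composition, written in Python with PyTorch, is given below. The layer counts, channel sizes, the decreasing kernel sizes in the color block branch, and the class names are all illustrative assumptions; the patent does not disclose concrete hyperparameters, and the boundary branch is shown as a generic convolutional encoder rather than an actual ImageNet model.

import torch
import torch.nn as nn

class ColorBlockNet(nn.Module):
    """Cascaded conv layers with successively smaller kernels (feature-pyramid style)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.layers(x)

class BoundaryNet(nn.Module):
    """Stand-in for the boundary recognition branch."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.layers(x)

class PartitionLocator(nn.Module):
    """Fuses the two feature maps and regresses per-partition box fields
    (length ratio, width ratio, corner and center coordinates)."""
    def __init__(self, max_partitions: int = 16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, max_partitions * 8)  # 8 numbers per partition

    def forward(self, color_feat, boundary_feat):
        fused = self.fuse(torch.cat([color_feat, boundary_feat], dim=1))
        return self.head(fused.flatten(1))

class PartitionRecognizer(nn.Module):
    """The three independently trainable modules combined, as described above."""
    def __init__(self):
        super().__init__()
        self.color_block = ColorBlockNet()
        self.boundary = BoundaryNet()
        self.locator = PartitionLocator()

    def forward(self, canvas):
        return self.locator(self.color_block(canvas), self.boundary(canvas))

Because the three modules only meet at the partition positioning stage, the color block and boundary branches can be trained in parallel and recombined afterwards, which reflects the design choice described above.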
And step 103, carrying out content recognition on the region corresponding to each piece of sub-canvas region information in the sub-canvas region information set to generate region content information, and obtaining a region content information set.
In some embodiments, the execution body may perform content recognition on an area corresponding to each piece of canvas area information in the set of sub canvas area information to generate area content information, thereby obtaining the set of area content information. Wherein the regional content information in the regional content information set includes: content text, text color information, text ground color information, and region ground color information. Wherein the content text is text in an area corresponding to the area content information. The text color information characterizes a text color of the content text. The text ground color information characterizes the ground color of the content text. The region ground color information characterizes a region ground color in a region corresponding to the region content information.
As an example, the execution body described above may perform content recognition on the region corresponding to each piece of sub-canvas region information in the sub-canvas region information set through a YOLO (You Only Look Once) model to generate region content information.
In some optional implementations of some embodiments, the executing body performs content recognition on the region corresponding to each piece of sub-canvas region information in the sub-canvas region information set to generate region content information, which may include the following steps:
Firstly, extracting the regional information features corresponding to the regional information of the child canvas by a feature rough extraction model included in a pre-trained content recognition model so as to generate regional feature vectors.
The feature rough extraction model comprises a plurality of serially connected residual networks. Each residual network includes a plurality of serially connected convolutional layers.
And secondly, inputting the regional feature vector into a feature extraction model in the text recognition model to generate a text feature vector.
The text recognition model is a sub-model in the content recognition model, and the text recognition model is connected to the feature rough extraction model. The text recognition model may be composed of an FPN (Feature Pyramid Network) model and a plurality of serially connected convolutional layers, wherein the serially connected convolutional layers are connected to the tail of the FPN model.
And thirdly, inputting the text feature vector into a classification model included in the text recognition model to generate the content text included in the regional content information.
Wherein, the classification model can be a multi-classification model.
And a fourth step of generating text color information included in the region content information according to the text feature vector and the text color recognition model included in the text recognition model.
And fifthly, generating text ground color information included in the area content information according to the text feature vector and the text ground color recognition model included in the text recognition model.
The text color recognition model is a CNN (Convolutional Neural Network) model. The text ground color recognition model is also a CNN model. The number of convolutional layers in the text color recognition model is greater than the number of convolutional layers in the text ground color recognition model. The convolution kernel size of the text color recognition model is smaller than the convolution kernel size of the text ground color recognition model.
And sixthly, removing the region corresponding to the text ground color information included in the region content information from the region corresponding to the region information of the canvas to generate a region to be identified by the color value.
And seventh, determining the color value of the pixel in the area to be identified by the color value so as to generate the area ground color information included in the area content information.
Optionally, the determining, by the executing body, the color value of the pixel in the area to be identified by the color value to generate the area ground color information included in the area content information may include the following steps:
and in the first step, in response to determining that the color values of all pixels in the region to be identified are the same, determining the color values of all pixels as region ground color information included in the region content information.
As an example, the region ground color information may be "#d34899".
And a second step of determining, in response to determining that the color values of the pixels in the region to be identified are different, the color value and the pixel point coordinate set corresponding to the color value as sub-region ground color information in the region ground color information included in the region content information for each color value in the color value set corresponding to the region to be identified.
As an example, the region ground color information may be {"#d34899": [(x1, y1), (x2, y2), …, (xn, yn)], "#A14C7D": [(x2, y3), (x4, y2), …, (xn, ym)]}.
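The two branches above (a single shared color value versus a per-color map of pixel coordinates) can be sketched in Python as follows; the helper name region_ground_color and the dictionary layout are illustrative assumptions rather than terms used by the patent.

import numpy as np

def region_ground_color(pixels: np.ndarray):
    """pixels: (H, W, 3) uint8 array for the color-value region to be identified.
    Returns a single hex string if all pixels share one color value, otherwise a
    mapping from each hex color value to the list of (x, y) pixel coordinates."""
    def to_hex(rgb):
        return "#{:02X}{:02X}{:02X}".format(*rgb)

    flat = pixels.reshape(-1, 3)
    unique_colors = np.unique(flat, axis=0)

    if len(unique_colors) == 1:
        # First branch: every pixel has the same color value.
        return to_hex(unique_colors[0])

    # Second branch: group pixel coordinates by color value.
    ground_color_info = {}
    for color in unique_colors:
        ys, xs = np.where(np.all(pixels == color, axis=-1))
        ground_color_info[to_hex(color)] = list(zip(xs.tolist(), ys.tolist()))
    return ground_color_info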
The contents of steps 102 to 103 serve as an inventive point of the present disclosure and solve the second technical problem mentioned in the background section, namely that a typical canvas often comprises a plurality of partitions whose information differs in style settings and content, so that a conventional recognition approach cannot effectively identify the content in the canvas. Based on this, the present disclosure first sets a partition recognition model for partition recognition. In practical situations, the model structure of a common target detection model is complex, and a more complex model structure is often accompanied by a very large number of model parameters. Based on this, the present disclosure draws on the idea of modularization and divides the partition recognition model into modules by function, namely into a color block recognition model, a boundary recognition model, and a partition positioning model. By training the small models in parallel and combining the trained models, the training speed of the model is greatly improved while the model accuracy is maintained. Next, considering that the information contained in a canvas is often styled for aesthetic purposes, the text color, text ground color, and region ground color of the text contained in the canvas are set; to ensure that the text settings in the combined canvas are the same as those in the canvas to be identified, the present disclosure sets a text color recognition model and a text ground color recognition model for recognizing the text color and the text ground color. Meanwhile, considering that different receptive fields recognize text features with different effectiveness and that the characters in a canvas to be identified are often small, the present disclosure uses a feature rough extraction model and a feature fine extraction model to perform rough extraction and fine extraction of the canvas features. In this way, recognition of the content within the canvas is effectively achieved.
Step 104, for each region content information in the region content information set, performing the following processing steps:
step 1041, determining an information filling area according to the area ratio information and the area position information included in the sub-canvas area information corresponding to the display interface size information and the area content information.
In some embodiments, the executing body determines the information filling area according to the area ratio information and the area position information included in the sub-canvas area information corresponding to the display interface size information and the area content information. Wherein the display interface size information characterizes a display interface size of the information display device.
As an example, the display size of the display interface is 1920×1080. The sub-canvas area information corresponding to the area content information may be:
{
    area ratio information: {
        length ratio: 1/5;
        width ratio: 2/3;
    },
    area position information: {
        corner coordinates: [(0, 0), (1, 2)];
        center point coordinates: (1/2, 1);
    }
}
The determined size of the information filling area is 384 × 720. The coordinates of the upper-left corner of the information filling area are (0, 0). The coordinates of the lower-right corner of the information filling area are (384, 720). The coordinates of the center point of the information filling area are (192, 360).
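A short Python sketch of this determination, reproducing the worked example above, is given below; the function name compute_fill_area and the dictionary field names mirror the illustrative structure shown earlier and are assumptions rather than terms used by the patent.

def compute_fill_area(display_size, sub_canvas_info):
    """display_size: (width, height) of the display interface, e.g. (1920, 1080).
    sub_canvas_info: area ratio and position information for one partition.
    Returns the pixel-space information filling area (size, corners, center)."""
    disp_w, disp_h = display_size
    ratio = sub_canvas_info["area_ratio"]
    pos = sub_canvas_info["area_position"]

    # Map the relative extents of the partition onto the display interface.
    fill_w = round(disp_w * ratio["length_ratio"])   # 1920 * 1/5 = 384
    fill_h = round(disp_h * ratio["width_ratio"])    # 1080 * 2/3 = 720

    # Scale the relative corner coordinates into display pixels.
    (x0, y0), (x1, y1) = pos["corner_coords"]
    scale_x = fill_w / max(x1 - x0, 1)
    scale_y = fill_h / max(y1 - y0, 1)
    top_left = (round(x0 * scale_x), round(y0 * scale_y))
    bottom_right = (round(x1 * scale_x), round(y1 * scale_y))
    center = ((top_left[0] + bottom_right[0]) / 2,
              (top_left[1] + bottom_right[1]) / 2)
    return {"size": (fill_w, fill_h),
            "top_left": top_left,
            "bottom_right": bottom_right,
            "center": center}

Called with display_size=(1920, 1080) and the sub-canvas area information above, the sketch returns a 384 × 720 area with corners (0, 0) and (384, 720) and center (192, 360), matching the example.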
Step 1042, performing base color filling on the information filling area according to the area base color information included in the area content information, to obtain a filled information filling area.
In some embodiments, the executing body may perform under color filling on the information filling area according to the area under color information included in the area content information, to obtain a filled information filling area.
For example, the region base color information may be "#EDA17C", and the execution subject may fill the information filling region with the color corresponding to "#EDA17C" as its base color to obtain a filled information filling region.
In step 1043, text scaling is performed on the content text included in the area content information to generate a scaled text.
In some embodiments, the executing entity may perform text scaling on the content text included in the region content information to generate scaled text.
As an example, the content text included in the above area content information may be "XXXXXXXX", and the word size corresponding to that content text may be "20". First, the execution body may determine the ratio of the size corresponding to the display interface size information to the canvas size of the canvas to be identified as the scaling ratio. Then, the execution body may determine the product of the scaling ratio and the word size corresponding to the content text as the word size of the scaled text. For example, if the scaling ratio is "1.5", the word size of the scaled text is "30".
And step 1044, adding the scaled text to the filled information filling area to obtain a filled child canvas.
In some embodiments, the execution body may add the scaled text to the filled information filling area to obtain the filled child canvas.
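Steps 1042 to 1044 can be sketched together with the Pillow imaging library as follows. The use of Pillow, the font path, and the helper name build_filled_sub_canvas are assumptions for illustration only; the patent does not prescribe a particular drawing library.

from PIL import Image, ImageDraw, ImageFont

def build_filled_sub_canvas(fill_area, region_info, display_size, canvas_size):
    """fill_area: dict as returned by compute_fill_area (size in pixels).
    region_info: content text, word size, text color, text ground color,
    and region ground color recognized for this partition.
    Returns the filled sub-canvas image for one partition."""
    width, height = fill_area["size"]

    # Step 1042: fill the information filling area with the region ground color.
    sub_canvas = Image.new("RGB", (width, height), region_info["region_ground_color"])
    draw = ImageDraw.Draw(sub_canvas)

    # Step 1043: scale the word size by the ratio of display size to canvas size.
    scale = display_size[0] / canvas_size[0]              # e.g. 1.5
    word_size = round(region_info["word_size"] * scale)   # e.g. 20 -> 30
    # Font path is an assumption; substitute any available TrueType font.
    font = ImageFont.truetype("DejaVuSans.ttf", word_size)

    # Step 1044: draw the scaled text on a text-ground-color strip, keeping the
    # recognized text color and text ground color unchanged.
    text = region_info["content_text"]
    x0, y0, x1, y1 = draw.textbbox((0, 0), text, font=font)
    draw.rectangle((10, 10, 10 + (x1 - x0), 10 + (y1 - y0)),
                   fill=region_info["text_ground_color"])
    draw.text((10, 10), text, font=font, fill=region_info["text_color"])
    return sub_canvas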
And 105, combining the obtained filled sub-canvas sets to obtain a combined canvas, and displaying the combined canvas on the information display device.
In some embodiments, the execution body may combine the resulting populated sub-canvas sets to obtain a combined canvas, and display the combined canvas on the information display device. Wherein, first, the execution main body may perform canvas combination on the obtained filled sub-canvas set according to the relative position of the filled sub-canvas in the filled sub-canvas set to obtain a combined canvas. Then, the execution body may transmit the combined canvas to the information display device in a wireless connection or a wired connection, and display the combined canvas on the information display device.
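A possible way to combine the filled sub-canvases according to their relative positions, again using Pillow and the illustrative field names from the earlier sketches:

from PIL import Image

def combine_sub_canvases(filled_sub_canvases, display_size):
    """filled_sub_canvases: list of (fill_area, sub_canvas_image) pairs.
    Pastes each filled sub-canvas at its top-left corner to form the combined canvas."""
    combined = Image.new("RGB", display_size, "white")
    for fill_area, sub_canvas in filled_sub_canvases:
        combined.paste(sub_canvas, fill_area["top_left"])
    return combined

# The combined canvas would then be sent to the information display device,
# e.g. combined.save("combined_canvas.png") before transmission.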
Optionally, the above-mentioned execution body may further execute the following processing steps:
In response to detecting that the target user performs a point touch operation on the combined canvas displayed on the information display device, and the point touch operation time is longer than a preset threshold value, performing the following content amplifying steps:
and a first sub-step of determining the area where the target user performs the touch operation as a target area.
Wherein the target user may be a user who manipulates the information display apparatus.
As an example, the execution subject may determine an area where the target user performs a touch operation according to a touch electric signal of the target user on the information display apparatus to generate the target area.
And a second sub-step of identifying the content text corresponding to the target area as a target content text.
First, the execution subject may recognize a content text around the target area as the target content text with the target area as a center point.
And a third sub-step of determining an information display area according to the display interface size information and the preset content scaling ratio.
Wherein the preset content scaling ratio is a preset scale. The execution body may determine the product of the size corresponding to the display interface size information and the preset content scaling ratio as the size of the information display area.
And a fourth sub-step of scaling the target content text according to the size of the information display area to generate a text to be displayed.
As an example, first, the execution body may determine a product value of the size of the information presentation area and a preset text scaling value as the scaling value of the target content text. And then, the execution body zooms the target content text according to the zoom value to generate the text to be displayed.
And a fifth substep, filling the text to be displayed into the information display area.
Wherein, the information display area is displayed as a floating layer above the combined canvas.
In practice, when the combined canvas contains a large amount of content text, small and dense characters reduce the user's reading efficiency. By recognizing the touch operation and enlarging the text near the area where the user performs the touch operation, the local text is magnified and the user's reading efficiency for that text is improved.
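The content amplifying steps can be sketched as a small event handler in Python; the event fields, the preset thresholds, and the text_lookup parameter are assumptions made for illustration.

def handle_long_press(touch_event, display_size, text_lookup,
                      content_scale=0.4, press_threshold=0.8, text_scale=2.0):
    """touch_event: dict with 'position' (x, y) and 'duration' in seconds.
    text_lookup: callable mapping a point to the content text around it,
    built from the recognized region content information.
    Returns a floating information display area dict, or None for a short press."""
    if touch_event["duration"] <= press_threshold:
        return None  # not a long press; no amplification

    # First sub-step: the touched area becomes the target area.
    target_area = touch_event["position"]

    # Second sub-step: identify the content text around the target area.
    target_text = text_lookup(target_area)

    # Third sub-step: the information display area is the display size times a preset scale.
    panel_size = (int(display_size[0] * content_scale),
                  int(display_size[1] * content_scale))

    # Fourth and fifth sub-steps: scale the text and float the panel above the canvas.
    return {"panel_size": panel_size,
            "text": target_text,
            "word_size_scale": text_scale,
            "floating": True}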
Optionally, the above-mentioned execution body may further execute the following processing steps:
in response to detecting the above-described sliding operation of the target user with respect to the information presentation area, the following area cancel operation is performed:
And a first sub-step of determining the sliding time of the sliding operation.
As an example, when the target user touches the information display apparatus to generate a touch electric signal, the timer is started, and when the touch electric signal of the information display apparatus disappears, the timer is ended to generate the sliding time period.
And a second sub-step of determining a start position and an end position of the sliding operation.
As an example, the execution subject may determine the start position as the position at which the target user touches the information display apparatus and the first touch electric signal is generated. The execution body may determine the end position as the position at which the target user touches the information display apparatus and the last touch electric signal is generated.
And a third sub-step of constructing a fitting straight line corresponding to the sliding operation according to the starting point position and the end point position.
Wherein the execution body can construct a fitting straight line corresponding to the sliding operation from the start position and the end position through the two-point form of a straight line, i.e., the slope of the fitted line is (y_end - y_start) / (x_end - x_start).
And a fourth substep, in response to determining that the slope of the fit straight line is positive and the sliding time period is longer than a preset time period, canceling the information display area.
In practice, common cancellation operations are typically as follows. First, the user clicks a cancel button to dismiss the display; however, when the information display apparatus has a large screen, the user needs a certain amount of time to move a hand to the cancel button. Second, the user clicks anywhere on the interface to cancel the information display area; this clicking mode very easily causes false touches. Both modes give a poor user experience. With the area cancel operation of the present disclosure, the user can perform a sliding operation at any position on the interface of the information display device, and the information display area is cancelled when the conditions are met. The learning cost is lower, and at the same time the occurrence of false touches is greatly reduced.
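The area cancel operation reduces to a small gesture check, sketched below; the sign convention of the slope (computed here directly from the start and end points in screen coordinates) and the thresholds are assumptions for illustration.

def should_cancel_panel(start, end, duration, min_duration=0.3):
    """start, end: (x, y) touch positions in screen coordinates.
    duration: sliding time in seconds.
    Cancels the information display area when the fitted line through the
    start and end points has a positive slope and the slide lasts long enough."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if dx == 0:
        return False  # vertical slide: the fitted line has no finite slope
    slope = dy / dx   # two-point form of the line through start and end
    return slope > 0 and duration > min_duration

# Example: a slide from (100, 400) to (300, 600) lasting 0.5 s would cancel the panel.
# should_cancel_panel((100, 400), (300, 600), 0.5) -> True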
The above embodiments of the present disclosure have the following advantageous effects: by the canvas processing method of some embodiments of the present disclosure, effective reading of information within a canvas is achieved. Specifically, the reason why the information in the canvas cannot be effectively read is that: to ensure that the canvas is displayed on devices of different display sizes, the canvas is typically scaled directly. However, scaling the canvas directly can cause information (e.g., text information) within the canvas to be blurred, thereby affecting effective reading of the information within the canvas. Based on this, the canvas processing method of some embodiments of the present disclosure first acquires a canvas to be identified. Then, partition recognition is performed on the canvas to be identified to obtain a sub-canvas area information set, wherein the sub-canvas area information in the sub-canvas area information set comprises: area ratio information and area position information. Further, content recognition is performed on the area corresponding to each piece of sub-canvas area information in the sub-canvas area information set to generate area content information, yielding an area content information set, wherein the area content information in the area content information set comprises: content text, text color information, text ground color information, and region ground color information. In practice, a canvas often contains a plurality of partitions, and the information contained in different partitions differs; by using the partitions as recognition units, the information confusion caused by recognizing the canvas as a whole can be effectively avoided. Further, for each piece of area content information in the above area content information set, the following processing steps are performed: determining an information filling area according to the display interface size information and the area ratio information and area position information included in the sub-canvas area information corresponding to the area content information, wherein the display interface size information characterizes the display interface size of the information display device; filling the information filling area with a background color according to the region background color information included in the area content information, to obtain a filled information filling area; performing text scaling on the content text included in the area content information to generate scaled text; and adding the scaled text to the filled information filling area to obtain a filled sub-canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text ground color of the scaled text is the same as the ground color corresponding to the text ground color information included in the area content information. In practice, the display sizes of different display devices tend to differ, and thus the present disclosure adaptively scales and fills the information within each partition. This avoids the blurring of information (e.g., text information) within the canvas caused by directly scaling the canvas itself. Finally, the obtained filled sub-canvas set is combined to obtain a combined canvas, and the combined canvas is displayed on the information display device.
By the method, the information in the canvas can be effectively read.
With further reference to FIG. 3, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a canvas processing apparatus, which apparatus embodiments correspond to those illustrated in FIG. 1, which apparatus is particularly applicable in a variety of electronic devices.
As shown in FIG. 3, the canvas processing apparatus 300 of some embodiments comprises: an acquisition unit 301, a partition identification unit 302, a content identification unit 303, a processing unit 304, and a canvas combination and display unit 305. An acquisition unit 301 configured to acquire a canvas to be identified; the partition identifying unit 302 is configured to identify the to-be-identified canvas in a partition mode to obtain a sub-canvas area information set, wherein the sub-canvas area information in the sub-canvas area information set comprises: area duty information and area position information; a content identifying unit 303, configured to identify the content of the region corresponding to each piece of canvas region information in the piece of canvas region information set, so as to generate region content information, and obtain a piece of region content information set, where the region content information in the piece of region content information set includes: content text, text color information, text ground color information and region ground color information; a processing unit 304 configured to perform the following processing steps for each region content information in the above-described region content information set: determining an information filling area according to area ratio information and area position information included in the sub-canvas area information corresponding to the display interface size information and the area content information, wherein the display interface size information characterizes the display interface size of the information display device; according to the region background color information included in the region content information, background color filling is carried out on the information filling region, and a filled information filling region is obtained; text scaling is carried out on the content text included in the regional content information so as to generate a scaled text; adding the scaled text to the filled information filling area to obtain a filled child canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text ground color of the scaled text is the same as the ground color corresponding to the text ground color information included in the area content information; and a canvas combining and displaying unit 305 configured to combine the obtained filled sub-canvas sets to obtain a combined canvas, and display the combined canvas on the information display device.
It will be appreciated that the elements described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting benefits described above with respect to the method are equally applicable to the apparatus 300 and the units contained therein, and are not described in detail herein.
Referring now to fig. 4, a schematic diagram of an electronic device (e.g., computing device) 400 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 4 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring canvas to be identified; carrying out partition recognition on the canvas to be recognized to obtain a sub-canvas area information set, wherein the sub-canvas area information in the sub-canvas area information set comprises: area duty information and area position information; performing content recognition on an area corresponding to each piece of canvas area information in the piece of canvas area information set to generate area content information, and obtaining an area content information set, wherein the area content information in the area content information set comprises: content text, text color information, text ground color information and region ground color information; for each region content information in the above region content information set, the following processing steps are performed: determining an information filling area according to area ratio information and area position information included in the sub-canvas area information corresponding to the display interface size information and the area content information, wherein the display interface size information characterizes the display interface size of the information display device; according to the region background color information included in the region content information, background color filling is carried out on the information filling region, and a filled information filling region is obtained; text scaling is carried out on the content text included in the regional content information so as to generate a scaled text; adding the scaled text to the filled information filling area to obtain a filled child canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text ground color of the scaled text is the same as the ground color corresponding to the text ground color information included in the area content information; and combining the obtained filled sub-canvas sets to obtain a combined canvas, and displaying the combined canvas on the information display device.
Computer program code for carrying out operations of some embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software or by means of hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an acquisition unit, a partition identification unit, a content recognition unit, a processing unit, and a canvas combination and display unit. The names of these units do not, in some cases, limit the units themselves; for example, the canvas combination and display unit may also be described as "a unit that combines the obtained set of filled sub-canvases to obtain a combined canvas and displays the combined canvas on the information display device".
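As a purely illustrative sketch of how such units might be organized in software, the skeleton below groups the five units behind a processor-like object; every class and method name here is an assumption made for readability, not an API from the disclosure.

```python
# Hypothetical unit layout; all names and signatures are illustrative only.
class AcquisitionUnit:
    def acquire(self):
        """Return the canvas to be identified."""
        raise NotImplementedError

class PartitionIdentificationUnit:
    def identify(self, canvas):
        """Return the sub-canvas area information set."""
        raise NotImplementedError

class ContentRecognitionUnit:
    def recognize(self, canvas, area_info_set):
        """Return the area content information set."""
        raise NotImplementedError

class ProcessingUnit:
    def process(self, display_size, area_info_set, content_info_set):
        """Return the set of filled sub-canvases."""
        raise NotImplementedError

class CanvasCombinationAndDisplayUnit:
    def combine_and_display(self, filled_sub_canvases):
        """Combine the filled sub-canvases and display the combined canvas."""
        raise NotImplementedError

class CanvasProcessor:
    """A processor composed of the five units named in the text above."""
    def __init__(self, acquisition, partition, content, processing, display):
        self.acquisition_unit = acquisition
        self.partition_identification_unit = partition
        self.content_recognition_unit = content
        self.processing_unit = processing
        self.canvas_combination_and_display_unit = display
```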
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Some embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, implements any of the canvas processing methods described above.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention involved in the embodiments of the present disclosure is not limited to the specific combination of the technical features described above, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (9)

1. A canvas processing method, comprising:
acquiring a canvas to be identified;
performing partition recognition on the canvas to be identified to obtain a sub-canvas area information set, wherein each piece of sub-canvas area information in the sub-canvas area information set comprises: area ratio information and area position information;
performing content recognition on the area corresponding to each piece of sub-canvas area information in the sub-canvas area information set to generate area content information, and obtaining an area content information set, wherein each piece of area content information in the area content information set comprises: content text, text color information, text background color information and area background color information;
for each piece of area content information in the area content information set, performing the following processing steps:
determining an information filling area according to display interface size information and the area ratio information and area position information included in the sub-canvas area information corresponding to the area content information, wherein the display interface size information characterizes the display interface size of an information display device;
filling the information filling area with a background color according to the area background color information included in the area content information to obtain a filled information filling area;
performing text scaling on the content text included in the area content information to generate scaled text;
adding the scaled text to the filled information filling area to obtain a filled sub-canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text background color of the scaled text is the same as the background color corresponding to the text background color information included in the area content information;
and combining the obtained set of filled sub-canvases to obtain a combined canvas, and displaying the combined canvas on the information display device.
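For the text scaling step of claim 1, a minimal sketch is shown below. It assumes a TrueType font whose path the caller supplies, and it uses a binary search over font sizes as one possible scaling strategy; neither detail is taken from the disclosure.

```python
# Illustrative text-scaling helper; font_path and the size search are assumptions.
from PIL import Image, ImageDraw, ImageFont

def scale_text_to_area(text: str, area_size: tuple, font_path: str,
                       margin: int = 4) -> ImageFont.FreeTypeFont:
    """Return the largest font for which `text` still fits inside `area_size`."""
    max_w, max_h = area_size[0] - 2 * margin, area_size[1] - 2 * margin
    probe = ImageDraw.Draw(Image.new("RGB", (1, 1)))
    best = ImageFont.truetype(font_path, 1)
    lo, hi = 1, max_h  # a useful font size cannot exceed the area height
    while lo <= hi:
        mid = (lo + hi) // 2
        font = ImageFont.truetype(font_path, mid)
        left, top, right, bottom = probe.textbbox((0, 0), text, font=font)
        if right - left <= max_w and bottom - top <= max_h:
            best, lo = font, mid + 1   # fits: try a larger size
        else:
            hi = mid - 1               # too large: try a smaller size
    return best
```

A real implementation would also handle multi-line text and enforce a minimum readable size, which the sketch omits.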
2. The method of claim 1, wherein the method further comprises:
in response to detecting that a target user performs a point touch operation on the combined canvas displayed on the information display device and that the duration of the point touch operation is longer than a preset threshold value, performing the following content enlarging steps:
determining the area on which the target user performs the point touch operation as a target area;
identifying the content text corresponding to the target area as target content text;
determining an information display area according to the display interface size information and a preset content scaling ratio;
scaling the target content text according to the size of the information display area to generate text to be displayed;
and filling the text to be displayed into the information display area, wherein the information display area is displayed in a floating manner above the combined canvas.
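The long-press behaviour of claim 2 can be sketched as below. The 0.8-second threshold, the 0.6 content scaling ratio, the centred placement, and the region_lookup helper are all assumptions introduced only for illustration.

```python
# Illustrative long-press handling; thresholds and the overlay representation
# are assumptions, not values taken from the disclosure.
from dataclasses import dataclass

@dataclass
class FloatingOverlay:
    rect: tuple          # (x, y, width, height) of the information display area
    text: str            # the target content text, to be rescaled for display

LONG_PRESS_SECONDS = 0.8      # assumed preset threshold value
CONTENT_SCALE_RATIO = 0.6     # assumed preset content scaling ratio

def on_touch(display_size, touch_point, touch_seconds, region_lookup):
    """Return a FloatingOverlay when the touch qualifies as a long press."""
    if touch_seconds <= LONG_PRESS_SECONDS:
        return None
    # The target area is the area the user touched; region_lookup is a
    # hypothetical helper mapping a point to its content text.
    target_text = region_lookup(touch_point)
    # The information display area is derived from the display interface size
    # and the preset content scaling ratio, floating above the combined canvas.
    disp_w, disp_h = display_size
    width, height = int(disp_w * CONTENT_SCALE_RATIO), int(disp_h * CONTENT_SCALE_RATIO)
    x, y = (disp_w - width) // 2, (disp_h - height) // 2
    return FloatingOverlay(rect=(x, y, width, height), text=target_text)
```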
3. The method of claim 2, wherein the method further comprises:
in response to detecting a sliding operation of the target user on the information display area, performing the following area canceling operation:
determining a sliding duration of the sliding operation;
determining a start position and an end position of the sliding operation;
constructing a fitted straight line corresponding to the sliding operation according to the start position and the end position;
and canceling the information display area in response to determining that the slope of the fitted straight line is positive and that the sliding duration is longer than a preset duration.
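The cancellation condition of claim 3 reduces to simple geometry: fit a line through the start and end positions and test its slope together with the sliding duration. The sketch below assumes screen coordinates and a 0.3-second preset duration, neither of which is specified in the disclosure.

```python
# Illustrative slide-to-dismiss check; the duration threshold is an assumption.
MIN_SLIDE_SECONDS = 0.3  # assumed preset duration

def should_cancel_overlay(start: tuple, end: tuple, slide_seconds: float) -> bool:
    """Cancel the information display area when the fitted line has a positive
    slope and the sliding lasted longer than the preset duration."""
    (x0, y0), (x1, y1) = start, end
    if x1 == x0:                      # vertical slide: slope undefined, keep overlay
        return False
    slope = (y1 - y0) / (x1 - x0)     # line fitted through start and end points
    return slope > 0 and slide_seconds > MIN_SLIDE_SECONDS
```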
4. The method of claim 3, wherein the performing partition recognition on the canvas to be identified to obtain a sub-canvas area information set comprises:
performing color block recognition on the canvas to be identified through a color block recognition model included in a pre-trained partition recognition model to generate color block feature vectors, and obtaining a color block feature vector sequence;
performing boundary recognition on the canvas to be identified through a boundary recognition model included in the partition recognition model to generate boundary feature vectors, and obtaining a boundary feature vector sequence;
and inputting the color block feature vector sequence and the boundary feature vector sequence into a partition positioning model included in the partition recognition model to generate the sub-canvas area information set.
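Because the disclosure does not define the internals of the pre-trained sub-models, the sketch below only shows the orchestration of claim 4, treating the color block recognition model, the boundary recognition model, and the partition positioning model as opaque callables supplied by the caller.

```python
# Illustrative orchestration of the partition recognition model; the three
# callables are placeholders for pre-trained sub-models not defined here.
from typing import Callable, List, Sequence, Tuple

FeatureVector = Sequence[float]
AreaInfo = Tuple[Tuple[float, float], Tuple[float, float]]  # (ratio, position)

def recognize_partitions(
    canvas,
    color_block_model: Callable[[object], List[FeatureVector]],
    boundary_model: Callable[[object], List[FeatureVector]],
    positioning_model: Callable[[List[FeatureVector], List[FeatureVector]], List[AreaInfo]],
) -> List[AreaInfo]:
    """Produce the sub-canvas area information set from the canvas to be identified."""
    color_block_features = color_block_model(canvas)   # color block feature vector sequence
    boundary_features = boundary_model(canvas)         # boundary feature vector sequence
    # The partition positioning model consumes both sequences and yields the
    # area ratio and area position information for every sub-canvas.
    return positioning_model(color_block_features, boundary_features)
```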
5. The method of claim 4, wherein the performing content recognition on the area corresponding to each piece of sub-canvas area information in the sub-canvas area information set to generate area content information comprises:
performing coarse feature extraction on the area corresponding to the sub-canvas area information through a coarse feature extraction model included in a pre-trained content recognition model to generate an area feature vector;
inputting the area feature vector into a fine feature extraction model included in a text recognition model to generate a text feature vector, wherein the text recognition model is a sub-model of the content recognition model and is connected to the coarse feature extraction model;
inputting the text feature vector into a classification model included in the text recognition model to generate the content text included in the area content information;
generating the text color information included in the area content information according to the text feature vector and a text color recognition model included in the text recognition model;
generating the text background color information included in the area content information according to the text feature vector and a text background color recognition model included in the text recognition model;
removing the area corresponding to the text background color information included in the area content information from the area corresponding to the sub-canvas area information to generate a color-value identification area;
and determining the color values of the pixels within the color-value identification area to generate the area background color information included in the area content information.
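The final two steps of claim 5 amount to masking out the text background pixels before reading the remaining color values. A NumPy sketch follows; representing the area as an RGB array and matching the text background color exactly are simplifying assumptions.

```python
# Illustrative masking step; exact color matching is a simplifying assumption.
import numpy as np

def color_value_identification_mask(area_rgb: np.ndarray,
                                    text_bg_color: tuple) -> np.ndarray:
    """Return a boolean mask of the color-value identification area: True where a
    pixel does not match the text background color and therefore still needs its
    color value read."""
    return ~np.all(area_rgb == np.asarray(text_bg_color), axis=-1)
```

The mask produced here feeds directly into the sketch given after claim 6.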
6. The method of claim 5, wherein the determining the color values of the pixels within the color-value identification area to generate the area background color information included in the area content information comprises:
in response to determining that the color values of all pixels within the color-value identification area are the same, determining that color value as the area background color information included in the area content information;
and in response to determining that the color values of the pixels within the color-value identification area differ, determining, for each color value in the color value set corresponding to the color-value identification area, that color value and the set of pixel coordinates corresponding to that color value as sub-area background color information within the area background color information included in the area content information.
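Building on the mask from the previous sketch, the two branches of claim 6 (all remaining pixels share one color value versus several color values) might look like the following; the return types, a single RGB tuple or a color-to-coordinates dictionary, are assumptions made for illustration.

```python
# Illustrative background-color determination; the output structure is assumed.
import numpy as np

def determine_area_background(area_rgb: np.ndarray, keep_mask: np.ndarray):
    """area_rgb is an (H, W, 3) array; keep_mask marks the color-value
    identification area produced by the previous sketch."""
    pixels = area_rgb[keep_mask]              # (N, 3) remaining pixel colors
    colors = np.unique(pixels, axis=0)
    if len(colors) == 1:
        # Uniform case: the single color value is the area background color.
        return tuple(int(c) for c in colors[0])
    # Mixed case: record, for each color value, the coordinates of its pixels
    # as sub-area background color information.
    ys, xs = np.nonzero(keep_mask)            # coordinates of the kept pixels
    sub_area_info = {}
    for color in colors:
        sel = np.all(pixels == color, axis=-1)
        sub_area_info[tuple(int(c) for c in color)] = np.column_stack((xs[sel], ys[sel]))
    return sub_area_info
```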
7. A canvas processing apparatus, comprising:
an acquisition unit configured to acquire a canvas to be identified;
a partition identification unit configured to perform partition recognition on the canvas to be identified to obtain a sub-canvas area information set, wherein each piece of sub-canvas area information in the sub-canvas area information set comprises: area ratio information and area position information;
a content recognition unit configured to perform content recognition on the area corresponding to each piece of sub-canvas area information in the sub-canvas area information set to generate area content information, and obtain an area content information set, wherein each piece of area content information in the area content information set comprises: content text, text color information, text background color information and area background color information;
a processing unit configured to perform the following processing steps for each piece of area content information in the area content information set: determining an information filling area according to display interface size information and the area ratio information and area position information included in the sub-canvas area information corresponding to the area content information, wherein the display interface size information characterizes the display interface size of an information display device; filling the information filling area with a background color according to the area background color information included in the area content information to obtain a filled information filling area; performing text scaling on the content text included in the area content information to generate scaled text; adding the scaled text to the filled information filling area to obtain a filled sub-canvas, wherein the text color of the scaled text is the same as the color corresponding to the text color information included in the area content information, and the text background color of the scaled text is the same as the background color corresponding to the text background color information included in the area content information;
and a canvas combination and display unit configured to combine the obtained set of filled sub-canvases to obtain a combined canvas, and display the combined canvas on the information display device.
8. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 6.
9. A computer readable medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the method of any of claims 1 to 6.
CN202211064403.3A 2022-09-01 2022-09-01 Canvas processing method, canvas processing apparatus, electronic device, readable medium and program product Active CN115393472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211064403.3A CN115393472B (en) 2022-09-01 2022-09-01 Canvas processing method, canvas processing apparatus, electronic device, readable medium and program product

Publications (2)

Publication Number Publication Date
CN115393472A CN115393472A (en) 2022-11-25
CN115393472B (en) 2023-09-15

Family

ID=84125422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211064403.3A Active CN115393472B (en) 2022-09-01 2022-09-01 Canvas processing method, canvas processing apparatus, electronic device, readable medium and program product

Country Status (1)

Country Link
CN (1) CN115393472B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100031152A1 (en) * 2008-07-31 2010-02-04 Microsoft Corporation Creation and Navigation of Infinite Canvas Presentation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016191681A1 (en) * 2015-05-27 2016-12-01 Solview Systems Ltd. A system and method for generation canvas representations.
CN113420757A (en) * 2021-08-23 2021-09-21 北京每日优鲜电子商务有限公司 Text auditing method and device, electronic equipment and computer readable medium
CN114862720A (en) * 2022-05-25 2022-08-05 南京数睿数据科技有限公司 Canvas restoration method and device, electronic equipment and computer readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A text region discrimination method based on stroke width features and semi-supervised multiple-instance learning; 吴锐; 杜庆安; 张博宇; 黄庆成; 高技术通讯 (High Technology Letters) (02); full text *

Also Published As

Publication number Publication date
CN115393472A (en) 2022-11-25

Similar Documents

Publication Publication Date Title
US20230394671A1 (en) Image segmentation method and apparatus, and device, and storage medium
CN110413812B (en) Neural network model training method and device, electronic equipment and storage medium
US11443438B2 (en) Network module and distribution method and apparatus, electronic device, and storage medium
CN110298851B (en) Training method and device for human body segmentation neural network
CN114187459A (en) Training method and device of target detection model, electronic equipment and storage medium
CN115272182B (en) Lane line detection method, lane line detection device, electronic equipment and computer readable medium
CN111209856B (en) Invoice information identification method and device, electronic equipment and storage medium
CN114898177B (en) Defect image generation method, model training method, device, medium and product
CN112800276B (en) Video cover determining method, device, medium and equipment
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN115393472B (en) Canvas processing method, canvas processing apparatus, electronic device, readable medium and program product
CN115100536B (en) Building identification method and device, electronic equipment and computer readable medium
CN110619028A (en) Map display method, device, terminal equipment and medium for house source detail page
CN111340813B (en) Image instance segmentation method and device, electronic equipment and storage medium
CN113255812B (en) Video frame detection method and device and electronic equipment
CN115393423A (en) Target detection method and device
CN111696041B (en) Image processing method and device and electronic equipment
CN114399696A (en) Target detection method and device, storage medium and electronic equipment
CN112528970A (en) Guideboard detection method, device, equipment and computer readable medium
CN111612714A (en) Image restoration method and device and electronic equipment
CN111325093A (en) Video segmentation method and device and electronic equipment
CN112395826B (en) Text special effect processing method and device
CN116704473B (en) Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium
CN116630436B (en) Camera external parameter correction method, camera external parameter correction device, electronic equipment and computer readable medium
CN116246175B (en) Land utilization information generation method, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant