CN112633038A - Data processing method, data processing device, computer equipment and computer readable storage medium - Google Patents

Data processing method, data processing device, computer equipment and computer readable storage medium

Info

Publication number
CN112633038A
CN112633038A
Authority
CN
China
Prior art keywords: image, visual component, visual, category, component
Prior art date
Legal status
Pending
Application number
CN201910907266.7A
Other languages
Chinese (zh)
Inventor
杨卓群
周敬新
张帅
关健
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910907266.7A priority Critical patent/CN112633038A/en
Publication of CN112633038A publication Critical patent/CN112633038A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 Document-oriented image-based pattern recognition
    • G06V 30/41 Analysis of document content
    • G06V 30/413 Classification of content, e.g. text, photographs or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/38 Creation or generation of source code for implementing user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a data processing method, a data processing apparatus, a computer device and a computer-readable storage medium, and belongs to the field of data visualization. In the method, a computer device sends a first image to a server; the server recognizes the first image to obtain the category and position of each visual component in the first image, and sends the category and position of the visual components back to the computer device.

Description

Data processing method, data processing device, computer equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of data visualization, and in particular, to a data processing method, an apparatus, a computer device, and a computer-readable storage medium.
Background
With the explosion of the big data industry, more and more enterprises have become aware of the importance of data management and application, especially of data visualization technology for data presentation. Data visualization is the visual presentation of data in a database; it is an application of visualization technology to non-spatial data, displaying data and its structural relationships more intuitively through graphics or images. The basic idea is to treat each data item in a database as a single primitive, form a data image from a large collection of such items, and express each attribute of the data in multiple dimensions so that the data can be observed and analyzed more deeply. Data visualization turns the task of understanding multidimensional data into the simple task of looking at colors and distinguishing lengths and heights, greatly shortening the time needed to understand the data, so more and more enterprises have begun to use data visualization technology to display, monitor and analyze all kinds of operational data.
At present, when a user obtains a screen design drawing with a good design concept, the user often hopes to make some adaptive modifications on its basis. The user can create a canvas in a data visualization system and, referring to the visual components in the obtained screen design drawing, manually drag the corresponding visual components from a visual component library preset in the data visualization system onto the canvas to reproduce the picture layout of the screen design drawing. The user then modifies this picture layout to obtain a layout with a better display effect and displays data according to the modified layout.
In this data processing procedure, the entire component layout is completed by manual dragging; the operation steps are cumbersome and time is easily wasted.
Disclosure of Invention
The embodiments of the disclosure provide a data processing method, a data processing apparatus, a computer device and a computer-readable storage medium, which can solve the problems in the related art that the operation steps are cumbersome and time is easily wasted. The technical solution is as follows:
in a first aspect, a data processing method is provided, the method including:
acquiring a first image, wherein the first image is an image of a screen design drawing;
identifying the visual components in the first image to obtain the category and the position of the visual components in the first image;
and according to the category and the position of the visual component, creating the visual component in the canvas to obtain the picture layout of the target screen.
In one implementation of the method, the computer device sends the first image to a server, and the server recognizes the first image to obtain the category and position of the visual components in the first image and sends the category and position of the visual components back to the computer device.
In a first possible implementation manner of the first aspect, the identifying a visual component in the first image to obtain a category and a location of the visual component in the first image includes:
and inputting the first image into a visual component classification model, and performing target detection on a plurality of candidate areas in the first image through the visual component classification model to obtain the category and the position of the visual component.
The visual components in the first image are identified through the visual component classification model to obtain their category and position, ensuring that the category and position of the visual components displayed in the subsequent picture layout are correct.
In a second possible implementation form of the first aspect,
after the target detection is performed on the candidate regions in the first image through the visual component classification model, and the category and the position of the visual component are obtained, the method further includes:
when a feedback result of the category of the visual component in the first image is received, the feedback result is sent to a server.
The server updates the visual component classification model based on the feedback result of the category of the visual component so as to improve the accuracy of the visual component classification model in classifying the visual component.
In a third possible implementation form of the first aspect, the method further comprises at least one of:
extracting colors of visual components in the first image;
extracting a background color of the first image;
the creating of the visual component in the canvas according to the category and the position of the visual component and the obtaining of the picture layout of the target screen comprise:
and creating the visual component in the canvas according to the extracted color and the category and the position of the visual component to obtain the picture layout of the target screen.
By extracting the background color of the first image and the colors of the visual components in it, the color attributes of the visual components are obtained, so the target screen can subsequently be generated with its colors already rendered; because the rendering is automatic, the waste of time is reduced.
In a fourth possible implementation manner of the first aspect, the method further includes:
normalizing the location of the visual component;
the creating of the visual component in the canvas according to the category and the position of the visual component and the obtaining of the picture layout of the target screen comprise:
and according to the category and the normalized position of the visual component, creating the visual component in the canvas to obtain the picture layout of the target screen.
Through the normalization of the positions of the visual components in the first image, more accurate position information is obtained, which saves the time needed to process the visual components and speeds up generation of the picture layout.
In a fifth possible implementation manner of the first aspect, the normalizing the position of the visual component includes at least one of:
adjusting the boundary of the visual component into a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component fills the canvas;
when there is an area of coverage between any two visual components, the coverage between the two visual components is eliminated.
Through the normalization adjustment of the visual components, their boundaries become regular quadrilaterals that fill the whole canvas without covering one another, so the visual components display better on the canvas, yielding a better picture layout for the target screen.
In a sixth possible implementation manner of the first aspect, the method further includes:
and carrying out noise reduction processing on the first image.
Through the noise reduction processing, interference from the resolution of the first image is eliminated, so that the identification of the visual components in the first image is more accurate.
In a seventh possible implementation manner of the first aspect, the recognizing the first image to obtain the category and the position of the visual component includes:
obtaining a second image based on the effective area of the first image, wherein the second image comprises image content in the effective area;
and identifying the second image to obtain the category and the position of the visual component.
When the original image contains an effective area, the effective area of the original image needs to be identified to obtain the first image, so that the visual components in the image are identified more accurately.
In an eighth possible implementation manner of the first aspect, the obtaining a second image based on the effective area of the first image includes any one of:
cropping the effective area of the first image to obtain the second image;
cropping the effective area of the first image to obtain an effective area image, and stretching the effective area image to obtain the second image.
The first image is obtained by processing the effective area of the original image, so that interference from image content outside the effective area is reduced in subsequent image processing.
In a second aspect, a data processing method is provided, the method comprising:
receiving a first image sent by computer equipment, wherein the first image is an image of a screen design drawing;
identifying the visual components in the first image to obtain the category and the position of the visual components in the first image;
sending the category and location of the visual component to the computer device.
In the method of the second aspect, the computer device obtains the first image, the visual components in it are identified to obtain their category and position, and the visual components are created in a canvas according to that category and position to obtain the picture layout of a target screen. This process can directly recognize an image containing a design drawing to obtain the positions and categories of the visual components in it, and generate the screen layout based on the recognition result, without manual operation by the user, thereby reducing the waste of time.
In a first possible implementation manner of the second aspect, the identifying the visual component in the first image to obtain the category and the location of the visual component in the first image includes:
and inputting the first image into a visual component classification model, and performing target detection on a plurality of candidate areas in the first image through the visual component classification model to obtain the category and the position of the visual component.
The visual components in the first image are identified through the visual component classification model to obtain their category and position, ensuring that the category and position of the visual components displayed in the subsequent picture layout are correct.
In a second possible implementation manner of the second aspect, after the performing target detection on multiple candidate regions in the first image through the visual component classification model to obtain the category and the position of the visual component, the method further includes:
and when a feedback result of the computer equipment on the category of the visual component in the first image is received, updating the visual component classification model according to the feedback result.
The server updates the visual component classification model based on the feedback result of the category of the visual component so as to improve the accuracy of the visual component classification model in classifying the visual component.
In a third possible implementation form of the second aspect, the method further comprises at least one of:
extracting colors of visual components in the first image;
extracting a background color of the first image;
the sending the category and location of the visual component to the computer device comprises:
sending the extracted color and the category and location of the visual component to the computer device.
By extracting the background color of the first image and the colors of the visual components in it, the color attributes of the visual components are obtained, so the target screen can subsequently be generated with its colors already rendered; because the rendering is automatic, the waste of time is reduced.
In a fourth possible implementation manner of the second aspect, the method further includes:
normalizing the location of the visual component;
the sending the category and location of the visual component to the computer device comprises:
and sending the normalized category and the position of the visual component to the computer equipment.
Through the normalization of the positions of the visual components in the first image, more accurate position information is obtained, which saves the time needed to process the visual components and speeds up generation of the picture layout.
In a fifth possible implementation form of the second aspect, the normalizing the position of the visual component comprises at least one of:
adjusting the boundary of the visual component into a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component fills the canvas;
when there is an area of coverage between any two visual components, the coverage between the two visual components is eliminated.
Through the normalization adjustment of the visual components, their boundaries become regular quadrilaterals that fill the whole canvas without covering one another, so the visual components display better on the canvas, yielding a better picture layout for the target screen.
In a sixth possible implementation manner of the second aspect, the method further includes:
and carrying out noise reduction processing on the first image.
Through the noise reduction processing, interference from the resolution of the first image is eliminated, so that the identification of the visual components in the first image is more accurate.
In a seventh possible implementation manner of the second aspect, the recognizing the first image to obtain the category and the position of the visual component includes:
obtaining a second image based on the effective area of the first image, wherein the second image comprises image content in the effective area;
and identifying the second image to obtain the category and the position of the visual component.
When the original image contains an effective area, the effective area of the original image needs to be identified to obtain the first image, so that the visual components in the image are identified more accurately.
In an eighth possible implementation manner of the second aspect, the obtaining a second image based on the effective area of the first image includes any one of:
cropping the effective area of the first image to obtain the second image;
cropping the effective area of the first image to obtain an effective area image, and stretching the effective area image to obtain the second image.
The first image is obtained by processing the effective area of the original image, so that interference from image content outside the effective area is reduced in subsequent image processing.
In a third aspect, a data processing apparatus is provided for executing the above data processing method. Specifically, the data processing apparatus includes a functional module for executing the data processing method provided in the first aspect or any one of the optional manners of the first aspect.
In a fourth aspect, a data processing apparatus is provided for performing the above data processing method. In particular, the data processing apparatus comprises functional modules for performing the data processing method provided by the second aspect or any one of the alternatives of the second aspect.
In a fifth aspect, a computer device is provided, which includes one or more processors and one or more memories, and at least one instruction is stored in the one or more memories, and the instruction is loaded and executed by the one or more processors to implement the operations performed by the data processing method.
In a sixth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operations performed by the data processing method as described above.
Drawings
Fig. 1 is a specific implementation environment of a data processing method provided by an embodiment of the present disclosure;
FIG. 2 is a block diagram illustrating a computer device 200 according to an example embodiment;
FIG. 3 is a system framework diagram shown in accordance with an exemplary embodiment;
FIG. 4 is an interaction flow diagram illustrating a method of data processing in accordance with an exemplary embodiment;
FIG. 5 is a data diagram illustrating a data processing method according to an exemplary embodiment;
FIG. 6 is a comparison of an original image and a first image shown in accordance with an exemplary embodiment;
FIG. 7 shows the result of locating the target areas in a first image after target detection;
FIG. 8 illustrates a training process for visual component classification models and a visual component category identification process;
FIG. 9 shows a picture class visual component on the left and an example of a bulletin class visual component on the right;
FIG. 10 is a schematic diagram illustrating module boundary adjustment according to an exemplary embodiment;
FIG. 11 is a schematic diagram illustrating module boundary stretching according to an exemplary embodiment;
FIG. 12 is a schematic diagram illustrating module coverage in accordance with an exemplary embodiment;
FIG. 13 is a schematic diagram illustrating a processing procedure of a first image;
FIG. 14 is a feedback diagram of the visual components;
FIG. 15 is an interaction flow diagram illustrating a method of data processing in accordance with an exemplary embodiment;
FIG. 16 is a diagram of a data processing apparatus provided by an embodiment of the present disclosure;
fig. 17 is a diagram of a data processing apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a specific implementation environment of a data processing method provided in an embodiment of the present disclosure, and referring to fig. 1, the specific implementation environment of the data processing method includes: a server cluster 101 and a computer device 102.
The server cluster 101 may include at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server cluster 101 is used to provide background services for applications that support data processing. For example, the server cluster 101 may be configured to provide the function of identifying the categories of the visual components in the first image during data processing and to transmit the identification result to the computer device 102, so that the computer device 102 performs presentation based on the identification result.
In one possible implementation, the visual component category may be identified by a trained identification model, and accordingly, the server cluster 101 may include a server 1011 for performing identification and a server 1012 for performing model training, although the two servers may also be implemented on the same set of hardware, which is not limited by the embodiment of the present disclosure.
The computer device 102 is connected to the server cluster 101 through a wireless network or a wired network. The computer device 102 may be at least one of a smartphone, a desktop computer, a tablet computer, and a laptop portable computer. The computer device 102 may serve as an image provider, and a user may import an image on the computer device 102, send the image to the server cluster 101 for identification, and display the image based on an identification result returned by the server cluster 101. Of course, the computer device 102 may also be used as an image provider and an image recognizer independently, that is, a user may import an image on the computer device 102, recognize the image by the computer device 102, and display the image based on the recognition result without performing real-time interaction with the server cluster 101. The server cluster 101 may serve as a provider of a model used in the recognition, and the computer device 102 may download the recognition model provided by the server cluster 101 at any time, so as to implement the image recognition based on the recognition model, and finally perform a presentation based on the recognition result.
Computer device 102 may generally refer to one of a plurality of computer devices, with the embodiment illustrated as computer device 102. Those skilled in the art will appreciate that the number of computer devices described above may be greater or fewer. The number and the type of the computer devices are not limited by the embodiment of the disclosure.
FIG. 2 is a block diagram illustrating a computer device 200 according to an exemplary embodiment. For example, the computer device 200 may be provided as a user-side device or a server. Referring to fig. 2, the computer device 200 includes a processing component 201, which further includes one or more processors, and memory resources, represented by memory 202, for storing program code, such as application programs, executable by the processing component 201. The application programs stored in memory 202 may include one or more modules, each corresponding to a set of program code. The processing component 201 is configured to execute the program code to perform the data processing method described above.
The computer device 200 may also include a power supply component 203 configured to perform power management of the computer device 200, a wired or wireless network interface 204 configured to connect the computer device 200 to a network, and an input/output (I/O) interface 205. The computer device 200 may operate based on an operating system stored in the memory 202, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The flow of data processing will be briefly described below with reference to fig. 3. First, a complete design prototype from a design department, a high-definition image of a prototype design from the Internet, a template image from a visualization platform, or a photographed image of a finished large screen is taken as the input image. The input image is preprocessed by the scene processing function, for example noise processing and effective-area recognition and cropping, and object detection is performed by the object detection function, thereby obtaining the area image, coordinate attribute and category attribute of each visual component. Color extraction, layout normalization and other processing are then performed by the image processing function, and the resulting data is packaged as metadata. Finally, the computer device can perform arrangement based on the packaged metadata; through metadata parsing and automatic generation of components during arrangement, the development-state layout and components are displayed.
In the following, a possible implementation manner of the data processing method is described based on the generation process of the whole screen layout completed by the interaction between the computer device and the server:
referring to fig. 4 and fig. 5, fig. 4 is an interaction flowchart of a data processing method according to an exemplary embodiment, and fig. 5 is a data diagram of a data processing method according to an exemplary embodiment, which specifically includes the following steps:
401. The computer device obtains a first image, which is an image of a screen design drawing.
The screen design drawing may be a hand-drawn screen design drawing, a prototype image of a screen design, a complete image of a screen design drawing from the Internet, a photographed image of a screen picture layout, and so on, which is not limited by the embodiment of the present disclosure.
In step 401, the computer device may obtain an original image by photographing or scanning the physical screen design drawing and store the original image locally. The computer device is provided with an application program for automatic generation of a target screen layout; the user imports the original image stored on the computer device into the application program, so that the computer device obtains the original image.
In the embodiment of the present disclosure, the electronic image obtained by photographing or scanning may include an area outside the screen design drawing, and therefore, the effective area of the original image is identified and determined.
The identification of the effective region in the original image can be implemented in either of the following ways:
the method I is based on the Canny edge detection to identify the effective area in the original image.
Canny edge detection extracts the edge information in a picture; the region enclosed by the edge information is the region with actual image content, i.e. the effective area, and the region outside it is an invalid area.
The first method specifically includes: edge detection is performed on the original image to obtain at least one region, and a region whose shape meets the target condition is taken as the effective area.
Specifically: gray-level processing is performed on the original image to obtain a first gray-level image; Gaussian filtering is applied to the first gray-level image to filter out noise, and Canny edge detection is performed on the Gaussian-filtered image; a quadrilateral area is then screened out of the edge detection result as the area of the paper.
In the first method, "Canny edge detection is performed on the gaussian-filtered image" may include the following steps: processing the original image by using a Gaussian filter to obtain a smooth image of the original image, wherein the process filters noise of the original image, possibly amplifies edges of the original image and reduces identification of false edges in the original image to a certain extent; calculating the gradient strength and gradient direction of each pixel point in the original image, and inhibiting a non-maximum value in the original image, thereby eliminating stray response caused by edge detection; detecting a true edge and a false edge of the original image by applying double thresholds, wherein the edge formed by pixel points with gradient values higher than a first threshold is the true edge, and the edge formed by pixel points with gradient values lower than a second threshold is the false edge, wherein the first threshold is larger than the second threshold, and the pixel points with gradient values between the first threshold and the second threshold are restrained; through the process of further detection, the situation that some false edges are also possible true edges is avoided, and therefore the accuracy of edge detection is improved.
Method two: identifying the effective region based on a threshold.
The second method specifically includes: binarization processing is performed on the original image to obtain at least one white area, and a white area whose shape meets the target condition is taken as the effective area.
The specific identification method is as follows: the gray value of each pixel point in the original image is obtained from the gray-processing result, and the gray values of all pixel points are sorted to obtain their median; the pixel points corresponding to the median gray value are set to white and the remaining pixel points to black, yielding at least one white area; the white areas are then screened, and a white area whose shape meets the target condition is taken as the effective area.
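Under the same assumptions as above, method two can be sketched by binarizing at the median gray value and reusing the quadrangle screening; again, every name and constant is illustrative:

```python
import cv2
import numpy as np

def find_effective_area_threshold(original_path):
    """Binarize at the median gray value and return a quadrilateral white area."""
    gray = cv2.cvtColor(cv2.imread(original_path), cv2.COLOR_BGR2GRAY)
    median = float(np.median(gray))                  # median of all gray values
    # pixels above the median become white, the remaining pixels black
    _, binary = cv2.threshold(gray, median, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:
            return approx.reshape(4, 2)
    return None
```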
In either of the above identification methods, a region shape meeting the target condition means that the region is a quadrangle.
It should be noted that these two effective-area identification methods can effectively cover the identification of the effective areas of most screen design drawings. When identifying the effective area of the original image of a screen design drawing, either of the two identification methods may be selected, or the two may be used in sequence: when the effective area cannot be identified by the first method, the second method is triggered to identify it.
In the embodiment of the present disclosure, after the effective area of the original image is obtained in either of the two ways, the user further confirms through an operation on the computer device that it is the correct effective area, in either of the following ways:
in one implementation, after the computer device determines the effective area of the original image, a mark is made on the original image, the mark may be a graphic frame, the computer device displays the marked original image, when the user determines that the marked effective area is accurate, a confirmation instruction may be triggered through a confirmation operation, and after the computer device receives the confirmation instruction, the identified effective area may be determined as the effective area of the original image, so as to perform subsequent steps based on the effective area.
In another implementation, when the user judges that the marked effective area is inaccurate, the effective area may be determined by manual marking on the computer device. For example, the user marks four points on the displayed original image through any input device; the quadrangle formed by connecting the four points is the effective area of the original image, and the computer device determines the effective area based on the point coordinates.
Identifying the effective area eliminates interference from invalid information in the original image, making the subsequent recognition more accurate. Especially when the original image is a photograph of a hand-drawn design drawing, the influence of non-paper areas, such as the photographed background, can be eliminated, greatly improving the accuracy of subsequent recognition.
In addition, it should be noted that identifying the effective area is an optional step; during data processing, this step may be skipped and the acquired image sent directly to the server. Whether to perform the step may also be decided based on a judgment of the image: for example, after the computer device acquires the original image, it may first judge the source type of the original image; if the original image is a photograph of a hand-drawn design drawing, the effective-area identification step is performed, and if it is a scan of a hand-drawn design drawing, the step need not be performed.
In the embodiment of the present disclosure, after the computer device identifies the effective area of the original image, the first image may be acquired based on the effective area in either of the following implementations:
in the first implementation mode, the effective area of the original image is cut to obtain a first image.
After receiving an instruction confirming the cropping, or after acquiring the original image with the four calibrated points, the computer device can directly crop the effective area of the original image to obtain an image of the effective area, which is the first image.
And in the second implementation mode, the effective area of the original image is cut to obtain an effective area image, and the effective area image is stretched to obtain a first image.
After receiving an instruction confirming the cropping, or after acquiring the original image with the four calibrated points, the computer device crops the effective area of the original image to obtain an image of the effective area. This image may not be a regular quadrangle, so the computer device stretches it to obtain a stretched image of the effective area, namely the first image.
Stretching the image of the effective area amounts to a perspective transformation of it. A perspective transformation exploits the condition that the perspective center, the image point and the target point are collinear: the bearing surface (the perspective plane) is rotated around the trace line (the perspective axis) by a certain angle according to the law of perspective rotation, which changes the original projection beam while the projected geometric figure on the bearing surface remains unchanged.
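A sketch of the crop-and-stretch path, assuming the four corner points produced by the effective-area step; cv2.getPerspectiveTransform computes the perspective transformation described above, and the 1920x1080 output size is an assumed default:

```python
import cv2
import numpy as np

def crop_and_stretch(image, corners, width=1920, height=1080):
    """Warp the effective area bounded by 4 corner points into a regular rectangle."""
    corners = np.asarray(corners, dtype=np.float32)
    s = corners.sum(axis=1)                # x + y: min -> top-left, max -> bottom-right
    d = np.diff(corners, axis=1).ravel()   # y - x: min -> top-right, max -> bottom-left
    src = np.float32([corners[np.argmin(s)], corners[np.argmin(d)],
                      corners[np.argmax(s)], corners[np.argmax(d)]])
    dst = np.float32([[0, 0], [width - 1, 0],
                      [width - 1, height - 1], [0, height - 1]])
    matrix = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
    return cv2.warpPerspective(image, matrix, (width, height))
```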
Of course, cropping and stretching the effective area is also optional: after the computer device acquires the original image, it may identify and detect the effective area to obtain the first image, or it may directly use the original image as the first image without identifying and cropping the effective area.
For example, as shown in fig. 6, the upper diagram of fig. 6 is a display example of an original image, and the lower diagram is a first image obtained after processing such as identification of an effective area and cropping and stretching, and the first image includes image content in the effective area.
402. The computer device sends the first image to a server.
In step 402, after acquiring the first image, the computer device may automatically send the first image to the server, or send the first image to the server after detecting a confirmation sending instruction of the user, which is not limited in this disclosure.
403. After receiving the first image sent by the computer equipment, the server identifies the visual component in the first image to obtain the category and the position of the visual component in the first image.
In step 403, after receiving the first image sent by the computer device, the server may perform noise reduction processing on the first image, because the first image may suffer interference from its resolution and background color. The noise reduction may be Gaussian filtering or median filtering to filter out noise in the first image, which is not limited by the embodiment of the present disclosure. For example, the process of noise reduction using median filtering is as follows:
carrying out gray processing on the first image to obtain a gray value of each pixel point in the first image after the gray processing, sequencing the gray values of all the pixel points in the area where each pixel point is located to obtain a median of the gray values of all the pixel points in the area, wherein the gray value represented by the median is the gray value of the pixel point.
In an embodiment of the present disclosure, the identifying of the visual component in the first image to obtain the category and the position of the visual component includes the following steps: and inputting the first image into a visual component classification model, and performing target detection on a plurality of candidate areas in the first image through the visual component classification model to obtain the category and the position of the visual component.
A target detection technique is used to identify the visual components in the first image and obtain their positions and categories. The essence of target detection is multi-target localization in the first image: multiple candidate areas are selected in the first image by sliding window or selective search, each candidate area containing at least one visual component, and the visual component classification model classifies and predicts the visual components in the candidate areas to determine the categories and positions of the visual components in the first image. Fig. 7 shows the result of locating the target areas in the first image after target detection.
For example, in the embodiment of the present disclosure, a Faster-RCNN target detection model and an InceptionV3 visual component classification model are used to classify the visual components in the first image, specifically as follows: the positions and corresponding categories of the visual components in the first image are obtained through the Faster-RCNN target detection model; the first image is filtered by the detected target areas, and the undetected target areas and the target areas whose detection probability is below a preset probability are extracted; these areas are input into the InceptionV3 visual component classification model to obtain their positions and the categories of the visual components in them; and the recognition results of the Faster-RCNN target detection model and the InceptionV3 visual component classification model are merged to obtain visual component categories and positions with higher accuracy.
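The following structural sketch of the two-stage pipeline uses torchvision's stock Faster-RCNN and InceptionV3 models as stand-ins for the patent's trained models; the 0.8 preset probability and all names are illustrative assumptions, and the real system would load the models trained as described below.

```python
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.inception_v3(weights="DEFAULT").eval()

def detect_components(first_image, threshold=0.8):
    """first_image: float tensor of shape (3, H, W) scaled to [0, 1]."""
    with torch.no_grad():
        result = detector([first_image])[0]      # dict of boxes, labels, scores
    merged = []
    for box, label, score in zip(result["boxes"], result["labels"], result["scores"]):
        if score >= threshold:
            merged.append((box.tolist(), int(label)))        # trust the detector
            continue
        # re-classify regions whose detection probability is below the preset value
        x1, y1, x2, y2 = (int(v) for v in box)
        crop = first_image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(299, 299))
        with torch.no_grad():
            label = classifier(crop).argmax(dim=1).item()
        merged.append((box.tolist(), label))
    return merged     # combined recognition results of both models
```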
The visual component classification model may be trained in the server in advance; fig. 8 shows the training process of the visual component classification model and the recognition process for visual component categories. The user may draw a number of visual components of different categories in advance, such as bar charts, line charts, maps, pie charts and liquid-level charts, and label them, for example with LabelImg (a picture labeling tool), to obtain label files, which may be in XML format. Data enhancement is then performed on the visual components of different categories, mainly to reduce overfitting of the network; overfitting means that a hypothesis fits the training data better than other hypotheses but fits data outside the training set poorly, and avoiding overfitting is a core task in classifier design. The data enhancement methods include rotation, affine transformation, perspective transformation, erosion and dilation, HSV perturbation, gamma perturbation and the like, to simulate variations in illumination and angle. The enhanced visual components and the pre-drawn visual components together form the picture data, which is labeled; after data enhancement, the data set is divided into a test set and a training set at a ratio of 1:3, stored in a file system, and used for offline training to obtain the visual component classification model. The first image is input into the visual component classification model, which divides the first image into regions and obtains the category of the visual component in each region; the category results are packaged, the user feeds back whether each category is correct, and the feedback results together with the picture data form the full image data, on which the visual component classification model is trained online to obtain a model with better generalization ability. The offline training of the visual component classification model is performed in the server. Many models can be used for target detection, for example Region-based Convolutional Neural Networks (R-CNN), the Single-Shot multibox Detector (SSD), You Only Look Once (YOLO), and the like. The first image is input into the trained visual component classification model, which recognizes it to obtain the categories of the visual components in the first image.
It should be noted that, when labeling the categories of visual components, areas where the characteristics of the visual component are obvious may be labeled, while visual components with unobvious characteristics (for example, picture visual components and bulletin visual components) are labeled uniformly as placeholders; a placeholder is a fixed position that is occupied first so that an appropriate visual component can be added there later. As shown in fig. 9, the left side is an example of a picture visual component and the right side an example of a bulletin visual component.
It should be noted that, referring to fig. 5, the server may include a recognition server and a GPU server, wherein the GPU server may be configured to perform training and updating of the visual component classification model, so as to send the visual component classification model to the recognition server for use.
In step 403, the positions of the visual components may also be normalized to obtain their normalized positions, ensuring that the shapes of the visual components are regular and suitable for screen display. The normalization of the positions of the visual components may specifically include any of the following steps:
4031. the boundaries of the visual elements are adjusted to be a regular quadrilateral.
The adjustment may be to adjust the height and width of the boundaries of the visual component to obtain the boundaries of the visual component shaped as a regular quadrilateral.
Taking the adjustment process of fig. 10 as an example: to adjust the height of the visual component boundaries, the number of black pixels inside the boundary is counted from left to right. While the count on the right side remains greater than that on the left, detection continues rightward; when the right-side count first falls below the left-side count, adjustment of the boundary height is triggered, and the heights of visual component boundaries A, B and D are adjusted in a certain proportion and marked as adjusted. When the right-side count falls below the left-side count a second time, as at boundary B, the heights of visual component boundaries C and E, below boundary A, are adjusted proportionally. The adjustment of the width of a visual component boundary is similar to the adjustment of its height and is not described again here.
4032. The boundaries of the visual components are stretched, and the stretched visual components fill the canvas.
In step 403, the specific steps of stretching the boundaries of the visual components so that the stretched visual components fill the canvas are as follows:
and stretching the boundaries of the visual components in sequence from small to large according to the coordinate information of the boundaries of the visual components, wherein the stretching sequence is upwards, leftwards, downwards and rightwards, and the stretching is carried out until the boundaries of other adjacent visual components or the boundaries of the whole image, and as shown in fig. 11, the boundaries of the visual components are stretched schematically.
4033. When there is an area of coverage between any two visual components, the coverage between any two visual components is eliminated.
To ensure adaptation to the screen display, whether coverage exists between the target areas can be determined from the positions of the target areas in the image; when coverage exists, the positions of the target areas are adjusted until the adjusted target areas no longer overlap. The specific method is as follows:
when there is a coverage area between any two target areas and the coverage is relative coverage, that is, there is an overlapping area between the two target areas with overlapping length smaller than 1/3 of their respective widths on the X axis or the Y axis, as shown in fig. 12, where there is an overlapping area between target area B and target area E, vertex coordinates 1 and 2 of target area B are obtained, vertex coordinates 5 and 6 of target area E are obtained, and relative coverage length L, L ═ X is obtained1-X3Moving the target area B to the left by a certain distance and moving the target area E to the right by a certain distance, wherein the certain distance is L/2, so that the target area B and the target area E have no overlapping area on the X axis. And detecting the relative coverage of each target area in the second image once to ensure that no relative coverage exists between any two target areas.
When a coverage area exists between two target areas and the coverage is absolute coverage, that is, the two target areas share common pixels, as with target area B and target area C in fig. 12, the vertex coordinates 1 and 2 of target area B and the vertex coordinates 3 and 4 of target area C are obtained, and the coverage length is computed as L = X1 - X2. Target area B is then moved to the left and target area C to the right, each by L/2, so that no common pixel exists between target areas B and C. This absolute-coverage detection is performed once for every target area in the second image, ensuring that absolute coverage does not exist between any two target areas.
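A sketch of the X-axis shift that both coverage cases share, following the L = X1 - X3 computation above; it assumes target area B lies to the left of its partner, and the names are illustrative:

```python
def eliminate_x_coverage(box_b, box_e):
    """Move two overlapping target areas apart by half the coverage length each."""
    bx1, by1, bx2, by2 = box_b
    ex1, ey1, ex2, ey2 = box_e
    length = bx2 - ex1              # coverage length L on the X axis
    if length <= 0:
        return box_b, box_e         # no coverage area between the two target areas
    shift = length / 2              # move B left by L/2 and E right by L/2
    return ((bx1 - shift, by1, bx2 - shift, by2),
            (ex1 + shift, ey1, ex2 + shift, ey2))
```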
It should be noted that after any coverage-elimination operation is completed, the boundaries of the target areas may be adjusted again until they are regular quadrilaterals; the specific operation is the same as the boundary adjustment described above and is not repeated here.
It should be noted that the above process is described taking the second image as an example; when steps 402 and 403 are not needed, the first image can be processed in the same way, which is not described again here.
In the embodiment of the disclosure, after the positions and categories of the visual components are obtained, the color of at least one of the visual components and the first image is further identified; that is, the method may further include either of the following steps: (1) extracting the colors of the visual components in the first image; (2) extracting the background color of the first image.
Taking the extraction of both the colors of the visual components and the background color of the first image as an example, the extraction may be implemented in either of the following ways:
the method comprises the steps of firstly, extracting the color of the visual component and the background color of the first image based on the HSV model.
The first way specifically includes: based on the HSV values of the pixels in the first image, pixels whose HSV values fall within a preset range are obtained by screening; the HSV values of the screened pixels are determined as the color of the visual component, and the HSV values of the pixels in the edge region of the first image are determined as the background color of the first image.
The first way may include the following steps. The first image is converted into a first matrix comprising a plurality of triples, each triple representing the HSV value of one pixel point in the first image. A second matrix and a third matrix are extracted from the first matrix, the second matrix comprising the H values of all triples and the third matrix the V values. Pixel points whose H value in the second matrix is greater than a first preset value and whose V value in the third matrix is smaller than the first preset value are screened, the second and third matrices are superposed to obtain a single-channel image, the pixel points in the single-channel image are counted, and the HSV value of the most frequent pixel points is taken as the background color of the visual component. Pixel points whose V value in the third matrix is greater than the first preset value are screened and counted, and the HSV value of the most frequent pixel points is taken as the foreground color of the visual component. The pixel points in the edge region of the first image are obtained by screening, the average HSV value of these edge pixels is calculated, and this average is determined as the background color of the first image.
For example, a first image is represented by an HSV model to obtain a first matrix, an H matrix (i.e., a second matrix) and a V matrix (i.e., a third matrix) in the first matrix are extracted, pixels greater than 100 (i.e., a first preset value) in the H matrix and pixels less than 100 (i.e., a first preset value) in the V matrix are screened, the H matrix and the V matrix are superimposed to obtain a single-channel image, the pixels in the single-channel image are counted, and the HSV value of the pixel with the largest occurrence frequency is used as the background color of the visual component. And screening pixel points which are more than 100 (namely a first preset value) in the V matrix, counting the pixel points, and taking the HSV value of the pixel point with the largest occurrence frequency as the foreground of the visual component. And obtaining pixel points in the edge area of the first image through screening, calculating the average value of the pixel points HSV in the edge area, and determining the average value as the background color of the first image.
The second way: extracting the colors of the visual components and the background color of the first image based on the RGB model.
The second way specifically includes: based on the RGB values of the pixels in the first image, pixels whose RGB values fall within a preset range are obtained by screening; the RGB values of the screened pixels are determined as the color of the visual component, and the RGB values of the pixels in the edge region of the first image are determined as the background color of the first image.
The second way may include the following steps. The first image is converted into a fourth matrix comprising a plurality of triples, each triple representing the RGB value of one pixel point in the first image. A fifth matrix and a sixth matrix are extracted from the fourth matrix, the fifth matrix comprising the pixel points whose RGB values are all smaller than a first preset value and the sixth matrix the pixel points whose RGB values are all greater than a second preset value. The pixel points in the fifth matrix are counted, and the RGB value of the most frequent pixel points is taken as the background color of the visual component; the pixel points in the sixth matrix are counted, and the RGB value of the most frequent pixel points is taken as the foreground color of the visual component. The edge region of the first image is obtained by screening, the average RGB value of its pixel points is calculated, and this average is determined as the background color of the first image.
For example, the first image is represented by an RGB model to obtain a fourth matrix, and a fifth matrix and a sixth matrix are extracted from it: the fifth matrix comprises the pixel points whose RGB values are all less than 100 (i.e., the first preset value), and the sixth matrix comprises the pixel points whose RGB values are all greater than 120 (i.e., the second preset value). The pixel points in the fifth matrix are counted, and the RGB value of the most frequent pixel point is taken as the background color of the visual component; the pixel points in the sixth matrix are counted, and the RGB value of the most frequent pixel point is taken as the foreground color of the visual component. The edge region of the first image is obtained through screening, the average RGB value of the pixel points in the edge region is calculated, and the average value is determined as the background color of the first image.
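A corresponding sketch for the RGB-based mode, under the same caveats; the presets 100 and 120 mirror the example above, and the edge-band width is again an assumption.

```python
import numpy as np

def extract_colors_rgb(image_rgb, preset1=100, preset2=120):
    # Fifth matrix: pixel points whose R, G and B are all below preset1.
    dark_mask = (image_rgb < preset1).all(axis=2)
    # Sixth matrix: pixel points whose R, G and B are all above preset2.
    light_mask = (image_rgb > preset2).all(axis=2)

    def most_frequent_rgb(mask):
        pixels = image_rgb[mask]
        if pixels.size == 0:
            return None
        values, counts = np.unique(pixels, axis=0, return_counts=True)
        return tuple(values[counts.argmax()])

    component_bg = most_frequent_rgb(dark_mask)   # most frequent dark pixel
    component_fg = most_frequent_rgb(light_mask)  # most frequent light pixel

    # Image background: mean RGB over an assumed 10-pixel edge band.
    edge = np.ones(image_rgb.shape[:2], dtype=bool)
    edge[10:-10, 10:-10] = False
    image_bg = tuple(image_rgb[edge].mean(axis=0))
    return component_bg, component_fg, image_bg
```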
Compared with identification based on RGB (R is red, G is green, B is blue), identifying the color of the visual components with the HSV model is more accurate: in the RGB representation, the values in every color channel lie in [0, 255], and the foreground colors of visual components are often light tones; which channel a light tone belongs to is difficult to determine among the RGB channels, so color identification based on RGB alone can be inaccurate.
It should be noted that, when extracting the color of the visual component, either of the two extraction methods may be used alone, or the two may be applied in sequence: when the color of the visual component cannot be extracted by the first method, it is extracted by the second. This is not limited in the embodiments of the present disclosure.
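A sketch of the sequential use described here, trying the HSV mode first and falling back to the RGB mode when it yields nothing, reusing the hypothetical helpers sketched above:

```python
import cv2

def extract_component_colors(image_bgr):
    bg, fg, image_bg = extract_colors_hsv(image_bgr)
    if bg is None or fg is None:
        # Fall back to the RGB-based mode when the HSV mode yields nothing.
        rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
        bg, fg, image_bg = extract_colors_rgb(rgb)
    return bg, fg, image_bg
```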
Fig. 13 is a schematic diagram of the processing procedure of the first image, covering the extraction of the colors of the visual components in the first image, the identification of their positions, and the extraction of the background color of the first image. The colors of the visual components are identified both based on the HSV model and based on the RGB model, yielding the colors of the visual components and the background color of the first image; the first image is normalized to obtain the position information of the visual components; and the colors and the position information of the visual components are packaged into metadata of the visual components, which is subsequently used to arrange the visual components automatically and generate the picture layout of the target screen.
404. The server sends the category and location of the visual component to the computer device.
The server may compose metadata of the screen layout based on the obtained category and position of the visual component. The metadata may include a component category attribute and coordinate attributes, and may be in JSON format. Taking a pie chart as an example, the metadata fields and their meanings are as follows:
res_message: return message
res_code: return code
result: result body
module_num: number of visual components parsed
photo_w: image width
photo_h: image height
pie_m: visual component category, here a multi-pie chart
x: initial x coordinate of the visual component
y: initial y coordinate of the visual component
w: width of the visual component
h: height of the visual component
x_percent: start x coordinate of the visual component boundary, in percent
y_percent: start y coordinate of the visual component boundary, in percent
w_percent: width of the visual component boundary, in percent
h_percent: height of the visual component boundary, in percent
ind: index value
score: classification result score
In the embodiment of the present disclosure, the server may further send the color of the visual component and the background color of the first image to the computer device. Again taking a pie chart as an example, the corresponding fields and their meanings are as follows (a hypothetical combined payload is sketched after the list):
m_back_clr: background color of the visual component
m_front_clr: foreground color of the visual component
back_clr: background color of the first image
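Assembled, the payload might look like the following hypothetical example, built here as a Python dict and serialized to JSON; the concrete field values, the nesting under modules, and the color strings are invented for illustration and are not taken from this disclosure.

```python
import json

metadata = {
    "res_message": "success",        # return message
    "res_code": 0,                   # return code
    "result": {                      # result body
        "module_num": 1,             # number of visual components parsed
        "photo_w": 1920,             # image width
        "photo_h": 1080,             # image height
        "modules": [{
            "pie_m": "multi_pie",    # visual component category
            "x": 120, "y": 80,       # initial coordinates of the component
            "w": 400, "h": 300,      # component width and height
            "x_percent": 6.25,       # boundary start coordinates, in percent
            "y_percent": 7.41,
            "w_percent": 20.83,      # boundary size, in percent
            "h_percent": 27.78,
            "ind": 3,                # index value of the category prediction
            "score": 0.97,           # classification result score
            "m_back_clr": "#1b2a49", # background color of the component
            "m_front_clr": "#ffd166" # foreground color of the component
        }],
        "back_clr": "#0d1b2a"        # background color of the first image
    }
}
print(json.dumps(metadata, indent=2))
```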
405. The computer device receives the category and the position of the visual component, creates the visual component in the canvas according to the category and the position of the visual component, and obtains the picture layout of the target screen.
In step 405, the computer device passes the received metadata to a layout engine; the layout engine parses the metadata and, according to the parsing result, lays out the entire large screen and the position, category, and color of each visual component, thereby obtaining the picture layout of the target screen.
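As one plausible reading of this step, the following minimal sketch shows a layout engine converting the percentage coordinates from the metadata above into canvas coordinates and creating one component per parsed module; the canvas methods (add_component, set_background) are hypothetical stand-ins for whatever rendering API the layout engine exposes.

```python
def build_layout(metadata, canvas_w, canvas_h, canvas):
    """Create one visual component per parsed module in the canvas."""
    result = metadata["result"]
    for module in result["modules"]:
        # Convert percentage coordinates into absolute canvas coordinates.
        x = module["x_percent"] / 100 * canvas_w
        y = module["y_percent"] / 100 * canvas_h
        w = module["w_percent"] / 100 * canvas_w
        h = module["h_percent"] / 100 * canvas_h
        canvas.add_component(
            category=module["pie_m"],            # detected component category
            rect=(x, y, w, h),
            background=module.get("m_back_clr"),
            foreground=module.get("m_front_clr"),
        )
    # Apply the background color extracted from the first image.
    canvas.set_background(result.get("back_clr"))
```

Working from percentages rather than raw pixel coordinates lets the same metadata drive target screens of different resolutions.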
In the embodiment of the disclosure, after receiving the position, category, and color of the visual component sent by the server, the computer device can feed back whether the visual component classification model on the server classified the visual components in the first image correctly, and sends the feedback result to the server. The feedback result includes, but is not limited to, the category of the visual component, the index value of the category prediction, and the accuracy of the category prediction; Fig. 14 shows a feedback diagram of the visual components. The server updates the visual component classification model according to the feedback result and, after obtaining the updated model, distributes it to each prediction server, thereby ensuring the reliability of the visual component classification model. The server can also package the feedback results sent by the computer device into an incremental visual component library, which together with the basic visual component library forms a full visual component library; the full library is used by the visual component classification model to identify the categories of the visual components in subsequent modules.
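A feedback message of the kind described here might be shaped as follows; the field names are assumptions patterned on the metadata fields above, not confirmed by the disclosure.

```python
feedback = {
    "category": "multi_pie",  # the category the user confirms or corrects
    "ind": 3,                 # index value of the category prediction
    "score": 0.97,            # accuracy of the category prediction
    "correct": False,         # whether the server's classification was right
}
```

On the server side, such records would be appended to the incremental visual component library, which together with the basic library forms the full library consulted when the classification model is retrained.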
It should be noted that the canvas may be created before the computer device sends the first image to the server, or after the computer device receives the boundary information of the module and the category of the visual component, which is not limited in this disclosure.
406. The computer device displays a picture layout of the target screen.
In step 406, the picture layout of the target screen is displayed on the computer device. Further, the computer device may be configured with service data, and the service data can be associated with and displayed in the picture layout of the target screen. Service data refers to data owned by the user, such as electricity consumption data of a power supply bureau or doctor data of a hospital.
In the embodiment of the present disclosure, after the computer device displays the picture layout of the target screen, if the user finds on viewing it that the layout is inconsistent with what was expected, the layout may be adjusted based on the screen design drawing and the adjusted layout displayed; if the adjusted layout is still inconsistent with what was expected, the screen design drawing is redesigned.
According to the method provided by the embodiment of the disclosure, the computer device sends the first image to the server, and the server identifies the first image to obtain the category and the position of the visual component in the first image and sends them to the computer device. The image containing the design drawing is thus identified directly, the position and category of the visual component are obtained, and the screen layout is generated based on the identification result, so that no manual operation by the user is needed and waste of time cost is reduced. Further, for a design drawing that describes the visual components completely, the colors of the visual components can also be identified, so that the generated screen layout is colored, which further reduces the waste of time cost.
Referring to Fig. 15, Fig. 15 is an interaction flowchart of a data processing method according to an exemplary embodiment. Unlike the embodiment shown in Fig. 4, in which a computer device and a server generate the whole screen layout through interaction, in the data processing method corresponding to Fig. 15 the computer device alone completes image acquisition, image recognition, subsequent display, and so on, thereby completing the generation of the screen layout. The method specifically includes the following steps:
1501. The computer device acquires a first image, which is an image of a screen design drawing.
In the case where the computer device itself has image recognition capability, the computer device may perform the subsequent step 1502 directly after acquiring the first image, without sending the first image to a server.
1502. The computer equipment identifies the visual components in the first image to obtain the category and the position of the visual components in the first image.
Step 1502 is similar to step 402 described above, and will not be described herein again.
In step 1502, after identifying the visual components in the first image and obtaining their categories, the computer device can feed back whether the classification results of the visual component classification model are correct and supply the feedback result to the model. The feedback result includes, but is not limited to, the category of each visual component, the target area corresponding to the visual component, the index value of the category prediction, and the accuracy of the category prediction. The visual component classification model is updated based on the feedback result, thereby ensuring its reliability.
1503. The computer device creates the visual component in the canvas according to the category and the position of the visual component, and obtains the picture layout of the target screen.
In step 1503, without a server participating in the generation of the picture layout of the target screen, the computer device can directly create the visual component in the canvas according to the category and the position of the visual component, thereby obtaining the picture layout of the target screen.
1504. The computer device displays a picture layout of the target screen.
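Putting steps 1501 to 1504 together, a minimal standalone pipeline could look like the sketch below, where recognize_components and the canvas object are hypothetical stand-ins for the recognition and rendering logic described in this disclosure.

```python
def generate_screen_layout(first_image, canvas):
    # Step 1502: identify the visual components to get category and position.
    modules = recognize_components(first_image)
    # Step 1503: create each component in the canvas at its detected position.
    for module in modules:
        canvas.add_component(category=module["category"], rect=module["rect"])
    # Step 1504: display the resulting picture layout of the target screen.
    canvas.show()
```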
In the method provided by the embodiment of the disclosure, a computer device acquires a first image, identifies the visual components in the first image to obtain their categories and positions, and creates the visual components in a canvas according to the categories and positions, obtaining the picture layout of a target screen. The process directly identifies the image containing the design drawing to obtain the positions and categories of the visual components, and generates the screen layout based on the identification result; no manual operation by the user is needed, reducing the waste of time cost. Further, for a design drawing that describes the visual components completely, the colors of the visual components can also be identified, so that the generated screen layout is colored, which further reduces the waste of time cost.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 16 is a diagram of a data processing apparatus according to an embodiment of the present disclosure. Referring to fig. 16, the apparatus includes:
a receiving module 1601, configured to perform the step 403, where the receiving module 1601 is further configured to perform the step 405;
an identifying module 1602, configured to perform the step 403;
a sending module 1603 for executing the step 402, and the sending module 1603 for executing the step 404.
In a possible implementation manner, the identifying module 1602 is configured to perform the above step 403, input the first image into a visual component classification model, and perform target detection on a plurality of candidate regions in the first image through the visual component classification model to obtain a category and a position of a visual component.
In a possible implementation manner, the receiving module 1601 is further configured to, when a feedback result of the computer device on the category of the visual component in the first image is received in the step 404, update the visual component classification model according to the feedback result.
In a possible implementation manner, the apparatus further includes an extraction module, configured to perform at least one of the following in step 403:
extracting the color of the visual component in the first image;
the background color of the first image is extracted.
In one possible implementation, the apparatus further includes:
and a normalizing module, configured to perform normalization on the position of the visual component in step 403.
In one possible implementation, the normalization module is configured to perform at least one of the following in step 403 (a sketch of these operations follows the list):
adjusting the boundary of the visual component into a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component is filled with canvas;
when there is an area of coverage between any two visual components, the coverage between any two visual components is eliminated.
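A minimal sketch of these three operations, assuming axis-aligned component boxes given as (x, y, w, h) tuples; the snapping grid, the horizontal stretch rule, and the overlap resolution are illustrative assumptions rather than the concrete procedure of this disclosure.

```python
def normalize_boxes(boxes, canvas_w, snap=8):
    if not boxes:
        return boxes
    # 1. Adjust each boundary to a regular quadrangle by snapping the
    #    box coordinates to a coarse grid.
    boxes = [tuple(round(v / snap) * snap for v in b) for b in boxes]
    # 2. Stretch the boundaries horizontally so the components fill the
    #    canvas width.
    right = max(x + w for x, y, w, h in boxes)
    scale = canvas_w / right if right else 1.0
    boxes = [(x * scale, y, w * scale, h) for x, y, w, h in boxes]
    # 3. Eliminate coverage between any two components: push a box down
    #    past the previous one when their areas overlap.
    boxes.sort(key=lambda b: (b[1], b[0]))
    for i in range(1, len(boxes)):
        x, y, w, h = boxes[i]
        px, py, pw, ph = boxes[i - 1]
        if x < px + pw and px < x + w and y < py + ph:
            boxes[i] = (x, py + ph, w, h)
    return boxes
```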
In one possible implementation, the apparatus further includes:
and a noise reduction module, configured to perform noise reduction processing on the first image in step 403.
In one possible implementation, the apparatus further includes:
an obtaining module, configured to perform the step 401 in which the computer device obtains a first image; the obtaining module is further configured to perform the above step 401 to obtain a second image based on the effective area of the first image.
According to the device, the computer equipment sends the first image to the server, and the server identifies the first image to obtain the category and the position of the visual component in the first image and sends them to the computer equipment. The image containing the design drawing is thus identified directly, the position and category of the visual component are obtained, and the screen layout is generated based on the identification result, so that no manual operation by the user is needed and waste of time cost is reduced. Further, for a design drawing that describes the visual components completely, the colors of the visual components can also be identified, so that the generated screen layout is colored, which further reduces the waste of time cost.
Fig. 17 is a diagram of a data processing apparatus according to an embodiment of the present disclosure. Referring to fig. 17, the apparatus includes:
an obtaining module 1701 for executing the step 1501;
an identifying module 1702 for performing the above step 1502;
a creating module 1703 for executing the above step 1503.
In a possible implementation manner, the identification module is configured to perform the step 1502 to input the first image into a visual component classification model, and perform object detection on a plurality of candidate regions in the first image through the visual component classification model to obtain a category and a position of the visual component.
In one possible implementation, the apparatus further includes:
a sending module, configured to, when a feedback result of the category of the visual component in the first image is received in the step 1502, send the feedback result to a server.
In a possible implementation manner, the apparatus further includes an extraction module, configured to perform at least one of the following in step 1502:
extracting the color of the visual component in the first image;
the background color of the first image is extracted.
In one possible implementation, the apparatus further includes:
a normalization module, configured to perform normalization on the position of the visual component in step 1502.
In one possible implementation, the normalization module is configured to perform at least one of the following in step 1502:
the boundary of the visual component is adjusted to be a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component is filled with canvas;
when there is an area of coverage between any two visual components, the coverage between any two visual components is eliminated.
In one possible implementation, the apparatus further includes:
and a denoising module, configured to perform denoising processing on the first image in step 1502.
In one possible implementation, the apparatus further includes:
and an image processing module, configured to perform the above step 1501 to obtain a second image based on the effective area of the first image.
According to the device, the computer equipment acquires the first image, identifies the visual components in the first image to obtain their categories and positions, and creates the visual components in the canvas according to the categories and positions, thereby obtaining the picture layout of the target screen. The process directly identifies the image containing the design drawing to obtain the positions and categories of the visual components, and generates the screen layout based on the identification result; no manual operation by the user is needed, reducing the waste of time cost. Further, for a design drawing that describes the visual components completely, the colors of the visual components can also be identified, so that the generated screen layout is colored, which further reduces the waste of time cost.
It should be noted that: in the data processing apparatus provided in the foregoing embodiments, the division into the functional modules described above is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the data processing apparatus and the data processing method provided by the above embodiments belong to the same concept; their specific implementation processes are described in the method embodiments and are not repeated here.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor to perform the data processing method in the above-described embodiments is also provided. For example, the computer-readable storage medium may be a read-only memory (ROM), a Random Access Memory (RAM), a compact disc-read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is meant to be illustrative of the principles of the present disclosure and not to be taken in a limiting sense, and any modifications, equivalents, improvements and the like that are within the spirit and scope of the present disclosure are intended to be included therein.

Claims (36)

1. A method of data processing, the method comprising:
acquiring a first image, wherein the first image is an image of a screen design drawing;
identifying the visual components in the first image to obtain the category and the position of the visual components in the first image;
and according to the category and the position of the visual component, creating the visual component in the canvas to obtain the picture layout of the target screen.
2. The method of claim 1, wherein the identifying visual components in the first image, and obtaining the category and location of visual components in the first image comprises:
and inputting the first image into a visual component classification model, and performing target detection on a plurality of candidate areas in the first image through the visual component classification model to obtain the category and the position of the visual component.
3. The method of claim 2, wherein after the target detection is performed on the plurality of candidate regions in the first image by the visual component classification model to obtain the category and the position of the visual component, the method further comprises:
when a feedback result of the category of the visual component in the first image is received, the feedback result is sent to a server.
4. The method of claim 1, further comprising at least one of:
extracting colors of visual components in the first image;
extracting a background color of the first image;
the creating of the visual component in the canvas according to the category and the position of the visual component and the obtaining of the picture layout of the target screen comprise:
and creating the visual component in the canvas according to the extracted color and the category and the position of the visual component to obtain the picture layout of the target screen.
5. The method of claim 1, further comprising:
normalizing the location of the visual component;
the creating of the visual component in the canvas according to the category and the position of the visual component and the obtaining of the picture layout of the target screen comprise:
and according to the category and the normalized position of the visual component, creating the visual component in the canvas to obtain the picture layout of the target screen.
6. The method of claim 5, wherein the normalizing the location of the visual component comprises at least one of:
adjusting the boundary of the visual component into a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component is filled with the canvas;
when there is an area of coverage between any two visual components, the coverage between the any two visual components is eliminated.
7. The method of claim 1, further comprising:
and carrying out noise reduction processing on the first image.
8. The method of claim 1, wherein the identifying the visual components in the first image to obtain the category and the position of the visual components comprises:
obtaining a second image based on the effective area of the first image, wherein the second image comprises image content in the effective area;
and identifying the second image to obtain the category and the position of the visual component.
9. The method according to claim 8, wherein the deriving a second image based on the effective area of the first image comprises any one of:
clipping the effective area of the first image to obtain a second image;
and cutting the effective area of the first image to obtain an effective area image, and stretching the effective area image to obtain a second image.
10. A method of data processing, the method comprising:
receiving a first image sent by computer equipment, wherein the first image is an image of a screen design drawing;
identifying the visual components in the first image to obtain the category and the position of the visual components in the first image;
sending the category and location of the visual component to the computer device.
11. The method of claim 10, wherein the identifying visual components in the first image, and obtaining the category and location of visual components in the first image comprises:
and inputting the first image into a visual component classification model, and performing target detection on a plurality of candidate areas in the first image through the visual component classification model to obtain the category and the position of the visual component.
12. The method of claim 11, wherein after the target detection is performed on the plurality of candidate regions in the first image by the visual component classification model to obtain the category and the position of the visual component, the method further comprises:
and when a feedback result of the computer equipment on the category of the visual component in the first image is received, updating the visual component classification model according to the feedback result.
13. The method of claim 10, further comprising at least one of:
extracting colors of visual components in the first image;
extracting a background color of the first image;
the sending the category and location of the visual component to the computer device comprises:
sending the extracted color and the category and location of the visual component to the computer device.
14. The method of claim 10, further comprising:
normalizing the location of the visual component;
the sending the category and location of the visual component to the computer device comprises:
and sending the normalized category and the position of the visual component to the computer equipment.
15. The method of claim 14, wherein the normalizing the location of the visual component comprises at least one of:
adjusting the boundary of the visual component into a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component is filled with the canvas;
when there is an area of coverage between any two visual components, the coverage between the any two visual components is eliminated.
16. The method of claim 10, further comprising:
and carrying out noise reduction processing on the first image.
17. The method of claim 10, wherein the identifying the visual components in the first image to obtain the category and the position of the visual components comprises:
obtaining a second image based on the effective area of the first image, wherein the second image comprises image content in the effective area;
and identifying the second image to obtain the category and the position of the visual component.
18. The method according to claim 17, wherein the deriving a second image based on the effective area of the first image comprises any one of:
clipping the effective area of the first image to obtain a second image;
and cutting the effective area of the first image to obtain an effective area image, and stretching the effective area image to obtain a second image.
19. A data processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first image;
the identification module is used for identifying the visual component in the first image to obtain the category and the position of the visual component in the first image;
and the creating module is used for creating the visual components in the canvas according to the categories and the positions of the visual components to obtain the picture layout of the target screen.
20. The apparatus of claim 19, wherein the recognition module is configured to input the first image into a visual component classification model, and perform object detection on a plurality of candidate regions in the first image through the visual component classification model to obtain the category and the location of the visual component.
21. The apparatus of claim 19, further comprising:
and the sending module is used for sending the feedback result to a server when receiving the feedback result of the category of the visual component in the first image.
22. The apparatus of claim 19, further comprising an extraction module configured to perform at least one of:
extracting colors of visual components in the first image;
extracting a background color of the first image;
the creating module is used for creating the visual component in the canvas according to the extracted color and the category and the position of the visual component to obtain the picture layout of the target screen.
23. The apparatus of claim 19, further comprising:
the normalization module is used for normalizing the position of the visual component;
the creation module is used for creating the visual components in the canvas according to the categories and the normalized positions of the visual components to obtain the picture layout of the target screen.
24. The apparatus of claim 23, wherein the normalization module is configured to perform at least one of:
the boundary of the visual component is adjusted to be a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component is filled with the canvas;
when there is an area of coverage between any two visual components, the coverage between the any two visual components is eliminated.
25. The apparatus of claim 19, further comprising:
and the noise reduction module is used for carrying out noise reduction processing on the first image.
26. The apparatus of claim 19, further comprising an image processing module, configured to obtain a second image based on the effective area of the first image.
27. A data processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a first image sent by computer equipment;
the identification module is used for identifying the first image to obtain the category and the position of the visual component;
a sending module for sending the category and location of the visual component to the computer device.
28. The apparatus of claim 27, wherein the recognition module is configured to input the first image into a visual component classification model, and perform object detection on a plurality of candidate regions in the first image through the visual component classification model to obtain the category and the location of the visual component.
29. The apparatus of claim 27, wherein the receiving module is further configured to update the visual component classification model according to a feedback result of the computer device on the category of the visual component in the first image.
30. The apparatus of claim 27, further comprising an extraction module configured to perform at least one of:
extracting colors of visual components in the first image;
extracting a background color of the first image;
and the sending module is used for sending the extracted color and the category and the position of the visual component to the computer equipment.
31. The apparatus of claim 27, further comprising:
the normalization module is used for normalizing the position of the visual component;
the sending module is used for sending the category and the position of the normalized visual component to the computer equipment.
32. The apparatus of claim 31, wherein the normalization module is configured to perform at least one of:
adjusting the boundary of the visual component into a regular quadrangle;
stretching the boundary of the visual component, wherein the stretched visual component is filled with the canvas;
when there is an area of coverage between any two visual components, the coverage between the any two visual components is eliminated.
33. The apparatus of claim 27, further comprising:
and the noise reduction module is used for carrying out noise reduction processing on the first image.
34. The apparatus of claim 27, further comprising:
and the acquisition module is used for obtaining a second image based on the effective area of the first image.
35. A computer device comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the data processing method of any one of claims 1 to 9 or 10 to 18.
36. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by the data processing method of any one of claims 1 to 9 or 10 to 18.