CN112529106A - Method, device and equipment for generating visual design manuscript and storage medium - Google Patents

Method, device and equipment for generating visual design manuscript and storage medium

Info

Publication number
CN112529106A
Authority
CN
China
Prior art keywords
visual
design
data set
generating
prior frame
Prior art date
Legal status
Pending
Application number
CN202011578001.6A
Other languages
Chinese (zh)
Inventor
门玉玲
Current Assignee
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd
Priority to CN202011578001.6A
Publication of CN112529106A
Legal status: Pending

Classifications

    • G06F18/214 — Pattern recognition; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2415 — Pattern recognition; Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/04 — Neural networks; Architecture, e.g. interconnection topology
    • G06N3/08 — Neural networks; Learning methods

Abstract

The invention relates to the technical field of artificial intelligence and discloses a method, a device, equipment and a storage medium for generating a visual design manuscript, which are used for solving the problem that interaction requirements cannot be systematically analyzed and for improving the accuracy of generating visual manuscripts that meet those requirements. The method for generating a visual design manuscript comprises the following steps: acquiring a visual design completion data set and performing element identification on it to generate a plurality of visual elements and a plurality of prior frame parameter sets; constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set; performing model training according to the multi-class visual element data set to generate a visual manuscript model; and acquiring an interactive design draft and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and the visual manuscript model. In addition, the invention also relates to blockchain technology, and the visual design completion data set can be stored in a blockchain.

Description

Method, device and equipment for generating visual design manuscript and storage medium
Technical Field
The present invention relates to the field of machine learning technologies, and in particular, to a method, an apparatus, a device and a storage medium for generating a visual design manuscript.
Background
In the current industry, interface design is usually divided into interaction design and visual design. Visual design is a means of expression, and the result of that expression, directed at the subjective form of the visual faculty. Visual communication design is the part of visual design that is oriented mainly toward the object being communicated to, i.e. the audience, and gives little weight to the designer's own visual needs. Since visual communication actually reaches both the audience and the designer, in-depth research on visual communication has increasingly focused on visual perception, for which the term visual design is the more appropriate name.
In the prior art, a visual design manuscript is usually produced manually by a visual designer according to the interactive design draft given by the interaction designer, and a few companies also use artificial intelligence to construct the visual design manuscript according to an interaction requirement or the interactive design draft.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for generating a visual design manuscript, which are used for solving the problem that interaction requirements cannot be systematically analyzed and for improving the accuracy of generating visual manuscripts that meet the interaction requirements.
A first aspect of the present invention provides a method for generating a visual design manuscript, which comprises the following steps: acquiring a visual design completion data set and performing element identification on the visual design completion data set to generate a plurality of visual elements and a plurality of prior frame parameter sets, wherein the visual design completion data set is a picture data set; constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set; performing model training according to the multi-class visual element data set to generate a visual manuscript model; and acquiring an interactive design draft, and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and the visual manuscript model.
Optionally, in a first implementation manner of the first aspect of the present invention, the obtaining a visual design completion data set and performing element identification on the visual design completion data set to generate a plurality of visual elements and a plurality of prior frame parameter sets, where the visual design completion data set is a picture data set, includes: acquiring a visual design completion data set, wherein the visual design completion data set is a picture data set; performing feature extraction on each visual design completion datum in the visual design completion data set by adopting an image recognition algorithm to generate a visual design completion feature set; calculating the visual design completion feature set to generate a plurality of visual design completion prior frame sets and a plurality of prior frame parameter sets, wherein the plurality of visual design completion prior frame sets correspond to the plurality of prior frame parameter sets one to one; and generating a plurality of visual elements based on the plurality of visual design completion prior frame sets and the plurality of prior frame parameter sets.
Optionally, in a second implementation manner of the first aspect of the present invention, the generating a plurality of visual elements based on the plurality of prior frame sets of visual design completion and the plurality of prior frame parameter sets includes: respectively reading a plurality of confidence coefficient parameter sets from the plurality of prior frame parameter sets, wherein the confidence coefficient parameter sets correspond to the prior frame sets one by one; and screening the plurality of prior frame sets based on the plurality of confidence coefficient parameter sets by adopting a non-maximum suppression algorithm to generate a plurality of visual elements.
Optionally, in a third implementation manner of the first aspect of the present invention, the constructing a data set according to the multiple visual elements and the corresponding prior frame parameter sets, and generating a multi-class visual element data set includes: marking the plurality of visual elements to generate a plurality of visual element marks; and classifying the visual design completion data set by combining the plurality of visual element marks and the corresponding prior frame parameter sets to generate a multi-class visual element data set.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the classifying the visual design completion data set in combination with the multiple visual element labels and the corresponding prior frame parameter sets, and generating a multi-class visual element data set includes: reading a plurality of position information parameters from a plurality of prior frame parameter sets corresponding to the plurality of visual element markers, the plurality of position information parameters corresponding to the plurality of visual element markers one-to-one; and dividing the visual design completion data with the same position information parameters and the same visual element marks into a class of visual element data sets to obtain a plurality of classes of visual element data sets.
Optionally, in a fifth implementation manner of the first aspect of the present invention, the acquiring an interactive design draft, and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file, and the visual manuscript model includes: acquiring an interactive design draft, and judging whether a preset design specification configuration file comprises identity identification information or not, wherein the interactive design draft is a design draft which is processed in advance according to interactive requirements; if the design specification configuration file comprises identity recognition information, reading a historical visual manuscript matched with the identity recognition information from a database; inputting the interactive design draft into the visual manuscript model, and generating at least two target visual manuscripts by referring to the historical visual manuscripts; and if the design specification configuration file does not contain identity identification information, generating at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
Optionally, in a sixth implementation manner of the first aspect of the present invention, if the design specification configuration file does not include the identification information, generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file, and the visual manuscript model includes: inputting the interactive design draft into the visual manuscript model to generate at least two initial visual manuscripts; performing standard processing on the at least two initial visual originals by combining a preset design specification configuration file to generate at least two visual originals meeting the specification; and performing style configuration on the at least two visual manuscripts meeting the specification by combining a preset style configuration file to generate at least two target visual manuscripts.
A second aspect of the present invention provides an apparatus for generating a visual design manuscript, comprising: the acquisition module is used for acquiring a visual design completion data set, performing element identification on the visual design completion data set and generating a plurality of visual elements and a plurality of prior frame parameter sets, wherein the visual design completion data set is a picture data set; the construction module is used for constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set; the training module is used for carrying out model training according to the multi-class visual element data set to generate a visual manuscript model; and the generating module is used for acquiring the interactive design draft and generating at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
Optionally, in a first implementation manner of the second aspect of the present invention, the obtaining module includes: an acquisition unit, configured to acquire a visual design completion data set, where the visual design completion data set is a picture data set; the characteristic extraction unit is used for extracting the characteristics of each visual design completion datum in the visual design completion data set by adopting an image recognition algorithm to generate a visual design completion feature set; the calculation unit is used for calculating the visual design completion feature set to generate a plurality of visual design completion prior frame sets and a plurality of prior frame parameter sets, wherein the plurality of visual design completion prior frame sets correspond to the plurality of prior frame parameter sets in a one-to-one manner; a first generating unit, configured to generate a plurality of visual elements based on the plurality of visual design completion prior frame sets and the plurality of prior frame parameter sets.
Optionally, in a second implementation manner of the second aspect of the present invention, the first generating unit may be further specifically configured to: respectively reading a plurality of confidence coefficient parameter sets from the plurality of prior frame parameter sets, wherein the confidence coefficient parameter sets correspond to the prior frame sets one by one; and screening the plurality of prior frame sets based on the plurality of confidence coefficient parameter sets by adopting a non-maximum suppression algorithm to generate a plurality of visual elements.
Optionally, in a third implementation manner of the second aspect of the present invention, the building module includes: the marking unit is used for marking the plurality of visual elements to generate a plurality of visual element marks; and the classification unit is used for classifying the visual design completion data set by combining the plurality of visual element marks and the corresponding prior frame parameter sets to generate a multi-class visual element data set.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the classification unit may further be specifically configured to: reading a plurality of position information parameters from a plurality of prior frame parameter sets corresponding to the plurality of visual element markers, the plurality of position information parameters corresponding to the plurality of visual element markers one-to-one; and dividing the visual design completion data with the same position information parameters and the same visual element marks into a class of visual element data sets to obtain a plurality of classes of visual element data sets.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the generating module includes: a judging unit, configured to acquire an interactive design draft and judge whether a preset design specification configuration file comprises identity identification information, wherein the interactive design draft is a design draft which is processed in advance according to interaction requirements; a reading unit, configured to read, if the design specification configuration file comprises identity identification information, a historical visual manuscript matched with the identity identification information from a database; a second generating unit, configured to input the interactive design draft into the visual manuscript model and, with reference to the historical visual manuscript, generate at least two target visual manuscripts; and a third generating unit, configured to, if the design specification configuration file does not include identity identification information, generate at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file, and the visual manuscript model.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the third generating unit may further be specifically configured to: inputting the interactive design draft into the visual manuscript model to generate at least two initial visual manuscripts; performing standard processing on the at least two initial visual originals by combining a preset design specification configuration file to generate at least two visual originals meeting the specification; and performing style configuration on the at least two visual manuscripts meeting the specification by combining a preset style configuration file to generate at least two target visual manuscripts.
A third aspect of the present invention provides a device for generating a visual design manuscript, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the device for generating a visual design manuscript to execute the above-described method for generating a visual design manuscript.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to execute the above-described method for generating a visual design manuscript.
According to the technical scheme provided by the invention, a visual design completion data set is obtained, and element identification is performed on the visual design completion data set to generate a plurality of visual elements and a plurality of prior frame parameter sets, the visual design completion data set being a picture data set; a data set is constructed according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set; model training is performed according to the multi-class visual element data set to generate a visual manuscript model; and an interactive design draft is acquired, and at least two target visual manuscripts are generated by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and the visual manuscript model. In the embodiment of the invention, the visual manuscript model is used, so that initial visual manuscripts meeting the requirements can be accurately generated based on the interactive design draft; the design specification configuration file and the style configuration file are then used to perform specification processing and style configuration on the initial visual manuscripts to generate the target visual manuscripts, thereby solving the problem that requirements cannot be systematically analyzed and improving the accuracy of generating visual manuscripts that meet the interaction requirements.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a method for generating a visual design manuscript according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of the method for generating a visual design manuscript according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of an apparatus for generating a visual design manuscript according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of the apparatus for generating a visual design manuscript according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a device for generating a visual design manuscript according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method, a device, equipment and a storage medium for generating a visual design manuscript, which are used for solving the problem that interaction requirements cannot be systematically analyzed and for improving the accuracy of generating visual manuscripts that meet the interaction requirements.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below, and referring to fig. 1, an embodiment of a method for generating a visual design document according to an embodiment of the present invention includes:
101. acquiring a visual design completion data set, performing element identification on the visual design completion data set, and generating a plurality of visual elements and a plurality of prior frame parameter sets, wherein the visual design completion data set is a picture data set;
the server acquires a visual design completion data set which is a picture data set, and then performs element identification on visual design completion data in the visual design completion data set, thereby generating a plurality of visual elements and a plurality of corresponding prior frame parameter sets. It is emphasized that the visual design complete data set may also be stored in a node of a blockchain in order to further ensure the privacy and security of the visual design complete data set.
It should be noted that a plurality of visual design completion data constitute the visual design completion data set, so the visual design completion data set includes a plurality of visual design completion data, and each visual design completion datum exists in a format that does not include editable layers, such as PNG or JPG.
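As a purely illustrative sketch (in Python; the directory path, file extensions and function name are assumptions and not part of the embodiment), assembling such a picture data set may look as follows:

```python
from pathlib import Path

def load_completion_dataset(root: str) -> list:
    """Collect flattened (no editable layers) picture files as the visual design completion data set."""
    allowed = {".png", ".jpg", ".jpeg"}
    return sorted(p for p in Path(root).rglob("*") if p.suffix.lower() in allowed)

# Hypothetical location of previously completed visual designs.
dataset = load_completion_dataset("./visual_design_completed")
print(f"{len(dataset)} completed visual designs found")
```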
The server acquires a visual design completion data set used for training the visual model. Each visual design completion datum contains a plurality of visual elements, such as a LOGO and icons. The server first identifies the visual elements in the visual design completion data and then performs model training based on the recognition results. Specifically, the server identifies each visual design completion datum in the visual design completion data set by adopting a preset object identification and positioning algorithm based on a deep neural network, namely YOLOv4; the recognition result comprises a plurality of prior frame parameter sets, and each prior frame parameter set comprises confidence coefficient information, abscissa information, ordinate information, length information and width information. The server determines a rectangular frame according to each prior frame parameter set, and then determines the visual elements according to the rectangular frames and the prior frame parameter sets, thereby obtaining a plurality of visual elements.
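For illustration only, the following Python sketch models one prior frame parameter set as a (confidence, abscissa, ordinate, height, width) record and keeps only sufficiently confident detections as visual-element candidates; the raw detector output format and all names are assumptions, since the embodiment does not prescribe a concrete interface:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PriorFrame:
    """One prior frame parameter set: confidence plus box geometry."""
    confidence: float
    x: float        # abscissa of the frame centre
    y: float        # ordinate of the frame centre
    height: float
    width: float

    def rectangle(self):
        """Rectangular frame (x_min, y_min, x_max, y_max) derived from the parameters."""
        return (self.x - self.width / 2, self.y - self.height / 2,
                self.x + self.width / 2, self.y + self.height / 2)

def identify_elements(raw_detections: List[dict], threshold: float = 0.5) -> List[PriorFrame]:
    """Keep only detections confident enough to be treated as visual elements."""
    frames = [PriorFrame(d["p"], d["x"], d["y"], d["h"], d["w"]) for d in raw_detections]
    return [f for f in frames if f.confidence >= threshold]

# Example output of a YOLOv4-style detector run on one completed design picture.
raw = [{"p": 0.93, "x": 120, "y": 40, "h": 32, "w": 32},   # likely a LOGO
       {"p": 0.18, "x": 300, "y": 90, "h": 20, "w": 60}]   # likely background clutter
print([f.rectangle() for f in identify_elements(raw)])
```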
It is to be understood that the execution subject of the present invention may be an apparatus for generating a visual design manuscript, a terminal or a server, which is not limited herein. The embodiment of the present invention is described by taking a server as the execution subject.
102. Constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set;
and the server constructs a data set according to the plurality of visual elements and the plurality of prior frame parameter sets to generate a multi-class visual element data set.
After obtaining the plurality of visual elements, the server constructs a data set by dividing visual design completion data having the same visual elements into one class of visual element data, thereby generating a multi-class visual element data set. For example, the visual design completion data set includes visual design completion data A1, A2, A3, A4, A5 and A6; the server reads the prior frame parameter set corresponding to each datum and, by combining the visual elements of each visual design completion datum with the corresponding prior frame parameter set, classifies matching visual design completion data into one class of visual element data, thereby generating the multi-class visual element data set.
103. Performing model training according to the multi-class visual element data set to generate a visual manuscript model;
and the server trains the model according to the multi-class visual element data set to generate a visual manuscript model.
The server divides each class of visual element data into a training element data set, a validation element data set and a test element data set; trains a deep learning model on the training element data set to generate an initial visual manuscript model; then inputs the validation element data set into the initial visual manuscript model for validation, thereby completing the parameter-tuning operation and generating the visual manuscript model; and finally evaluates the generalization ability of the visual manuscript model on the test element data set.
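A minimal, non-limiting sketch of this training flow; the concrete deep learning architecture is not fixed by the embodiment, so only the data split and placeholder calls are shown, and the split ratios are assumptions:

```python
import random
from typing import Dict, List, Tuple

def split_class(data: List[str], ratios=(0.7, 0.2, 0.1), seed=42) -> Tuple[List[str], List[str], List[str]]:
    """Split one class of visual-element data into training / validation / test subsets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * ratios[0])
    n_val = int(len(shuffled) * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

def train_visual_model(datasets: Dict[str, List[str]]):
    """Train on the training subsets, tune on validation, report generalisation on test."""
    for class_name, samples in datasets.items():
        train, val, test = split_class(samples)
        # model.fit(train) / model.validate(val) / model.evaluate(test) would go here;
        # only the data flow is shown because no concrete architecture is prescribed.
        print(f"{class_name}: {len(train)} train, {len(val)} val, {len(test)} test")

train_visual_model({"icon_set_1": [f"A{i}.png" for i in range(10)]})
```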
104. And acquiring an interactive design draft, and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and a visual manuscript model.
The server acquires an interactive design draft, wherein the interactive design draft is a design draft generated by preprocessing according to interactive requirements, and then at least two target visual manuscripts are generated by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and a visual manuscript model.
In this embodiment, the design specification configuration file may include identification information, wherein the identification information is added to the design specification configuration file in advance. When the design specification configuration file comprises the identity identification information, the server can refer to the historical visual manuscript matched with the identity identification information to generate at least two target visual manuscripts; and when the design specification configuration file does not comprise the identity identification information, the server generates at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
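The control flow of step 104 may be sketched as follows; the class and function names are hypothetical stand-ins, and the stand-in model simply fabricates strings so that the branching logic stays visible:

```python
class VisualModel:
    """Stand-in for the trained visual manuscript model (no concrete architecture is prescribed)."""
    def generate(self, draft: str, count: int = 2, reference=None) -> list:
        suffix = " (styled after history)" if reference else ""
        return [f"{draft} -> manuscript {i + 1}{suffix}" for i in range(count)]

def apply_specification(manuscript: str, spec: dict) -> str:
    """Apply the design specification configuration file (illustrative fields only)."""
    return f"{manuscript} [font={spec.get('font', 'font 1')}]"

def apply_style(manuscript: str, style: dict) -> str:
    """Apply the style configuration file (illustrative fields only)."""
    return f"{manuscript} [style={style.get('style', 'default')}]"

def generate_target_manuscripts(draft, spec_config, style_config, model, history_db):
    """Two branches of step 104: with or without identity identification information."""
    identity = spec_config.get("identity_id")
    if identity is not None:
        history = history_db.get(identity, [])       # historical visual manuscripts for this identity
        return model.generate(draft, count=2, reference=history)
    manuscripts = model.generate(draft, count=2)
    manuscripts = [apply_specification(m, spec_config) for m in manuscripts]
    return [apply_style(m, style_config) for m in manuscripts]

print(generate_target_manuscripts("interaction draft", {"font": "font 1"},
                                  {"style": "cartoon"}, VisualModel(), {}))
```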
In the embodiment of the invention, the visual manuscript model is used, so that initial visual manuscripts meeting the requirements can be accurately generated based on the interactive design draft; the design specification configuration file and the style configuration file are then used to perform specification processing and style configuration on the initial visual manuscripts to generate the target visual manuscripts, thereby solving the problem that requirements cannot be systematically analyzed and improving the accuracy of generating visual manuscripts that meet the interaction requirements.
Referring to fig. 2, another embodiment of the method for generating a visual design document according to the embodiment of the present invention includes:
201. acquiring a visual design completion data set, performing element identification on the visual design completion data set, and generating a plurality of visual elements and a plurality of prior frame parameter sets, wherein the visual design completion data set is a picture data set;
The server acquires a visual design completion data set, which is a picture data set, and then performs element identification on the visual design completion data in the visual design completion data set, thereby generating a plurality of visual elements and a plurality of corresponding prior frame parameter sets. It should be emphasized that the visual design completion data set may also be stored in a node of a blockchain in order to further ensure the privacy and security of the visual design completion data set.
It should be noted that one visual element corresponds to one prior frame parameter set, and each prior frame parameter set includes a plurality of different prior frame parameters. A plurality of visual design completion data form the visual design completion data set, so the visual design completion data set includes a plurality of visual design completion data, and each visual design completion datum exists in a format that does not include editable layers, such as PNG or JPG.
The server acquires a visual design completion data set used for training the visual model. Each visual design completion datum contains a plurality of visual elements, such as a LOGO and icons. The server first identifies the visual elements in the visual design completion data and then performs model training based on the recognition results. Specifically, the server identifies each visual design completion datum in the visual design completion data set by adopting a preset object identification and positioning algorithm based on a deep neural network, namely YOLOv4; the recognition result comprises a plurality of prior frame parameter sets, and each prior frame parameter set comprises confidence coefficient information, abscissa information, ordinate information, length information and width information. The server determines a rectangular frame according to each prior frame parameter set, and then determines the visual elements according to the rectangular frames and the prior frame parameter sets, thereby obtaining a plurality of visual elements.
Specifically, the server first acquires a visual design completion data set, which is a picture data set; then performs feature extraction on each visual design completion datum to generate a corresponding visual design completion feature, thereby obtaining a visual design completion feature set; then performs calculation based on the visual design completion feature set to generate a plurality of visual design completion prior frame sets and a plurality of prior frame parameter sets, wherein one prior frame corresponds to one prior frame parameter set; and finally generates a plurality of visual elements based on the plurality of prior frame parameter sets and the prior frame sets.
The server performs feature extraction on the visual design completion data through the backbone neural network in YOLOv4 to generate the visual design completion feature set. The server determines preset central points according to each visual design completion feature, calculates sample weights according to the central points, and clusters the widths and heights of all objects in the visual design completion features according to the sample weights to generate a plurality of prior frame parameter sets. Assuming that one prior frame parameter set is (x, y, h, w, p), the server determines an initial prior frame set by combining the abscissa information x, the ordinate information y, the height information h and the width information w in the prior frame parameters, then combines the confidence p with the initial prior frame set, and finally generates a plurality of visual elements, such as icon 1, icon 2 and icon 3.
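For illustration, a naive clustering sketch of the width/height statistics described above; YOLO-family detectors usually cluster with an IoU-based distance and the sample-weighting step is not detailed in the embodiment, so plain Euclidean k-means is used here as an assumption:

```python
import random

def cluster_prior_sizes(boxes, k=3, iterations=20, seed=0):
    """Naive k-means on (width, height) pairs to pick k prior frame sizes.

    Euclidean distance is used only to keep the sketch short; an IoU-based
    distance is the more common choice for YOLO-style anchors."""
    rng = random.Random(seed)
    centres = rng.sample(boxes, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for w, h in boxes:
            nearest = min(range(k), key=lambda i: (w - centres[i][0]) ** 2 + (h - centres[i][1]) ** 2)
            groups[nearest].append((w, h))
        centres = [(sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres

# Widths and heights of elements found in the completed designs (illustrative values).
sizes = [(32, 32), (30, 34), (64, 20), (60, 24), (120, 40), (118, 44)]
print(cluster_prior_sizes(sizes))   # three typical element sizes -> prior frame widths/heights
```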
The server generates a plurality of visual elements based on the plurality of prior frame parameter sets in combination with the prior frame sets, and the generating comprises:
the server respectively reads the confidence coefficient parameter corresponding to each prior frame from the plurality of prior frame parameter sets to obtain a plurality of confidence coefficient parameters, then the prior frame sets are screened based on the plurality of confidence coefficient parameters by adopting a non-maximum suppression algorithm to obtain a plurality of target prior frames, and the objects in the target prior frames are visual elements, so that a plurality of visual elements are obtained.
Non-maximum suppression (NMS) algorithms are widely used in computer vision tasks such as edge detection, face detection and target detection (DPM, YOLO, SSD, Fast R-CNN, etc.). Non-maximum suppression is implemented differently in different applications, but the idea is the same: its essence is to search for local maxima and to suppress non-maximum elements.
For example, suppose a target object has six prior frames, A, B, C, D, E and F, and that the confidence coefficient parameter of prior frame F is the largest. The server takes F as the maximum-confidence prior frame and then judges whether the overlap between each of the other prior frames A-E and F is greater than a second threshold. If the overlaps of B and D with F are greater than the second threshold, the prior frames B and D are deleted, and the maximum-confidence prior frame F is marked as a prior frame to be retained. From the remaining prior frames A, C and E, the frame with the largest confidence coefficient parameter, E, is selected as the new maximum-confidence prior frame; the overlap between E and the other prior frames A and C is then judged, and if it is greater than the second threshold, the prior frames A and C are deleted, the maximum-confidence prior frame E is marked, and E is taken as a target prior frame. The objects in the target prior frames are the visual elements, and this process is repeated until a plurality of visual elements are obtained.
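A compact sketch of the non-maximum suppression procedure described above, with the overlap measured as intersection-over-union and the "second threshold" passed in as a parameter:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def non_max_suppression(frames, overlap_threshold=0.5):
    """Keep the highest-confidence prior frame, drop frames that overlap it too much, repeat.

    Each frame is (confidence, (x_min, y_min, x_max, y_max)); the kept frames
    correspond to the target prior frames, i.e. the visual elements."""
    remaining = sorted(frames, key=lambda f: f[0], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)                 # e.g. prior frame F in the example above
        kept.append(best)
        remaining = [f for f in remaining if iou(best[1], f[1]) <= overlap_threshold]
    return kept

frames = [(0.9, (0, 0, 32, 32)), (0.8, (2, 2, 34, 34)), (0.7, (100, 100, 140, 140))]
print(non_max_suppression(frames))              # the second frame is suppressed by the first
```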
202. Marking the plurality of visual elements to generate a plurality of visual element marks;
the server marks the plurality of visual elements, generating a plurality of visual element marks.
For example, for each piece of visual design completion data, the server marks the visual elements in that piece of data. Assume that visual design completion data A1 includes 3 visual elements; marking generates the visual element marks "icon 1", "icon 2" and "icon 3". Visual design completion data A2 includes 3 visual elements; marking generates the visual element marks "icon 1", "icon 2" and "icon 3". Visual design completion data A3 includes 4 visual elements; marking generates the visual element marks "icon 2", "icon 3", "icon 4" and "icon 5". Visual design completion data A4 includes 4 visual elements; marking generates the visual element marks "icon 2", "icon 3", "icon 4" and "icon 5". Visual design completion data A5 includes 4 visual elements; marking generates the visual element marks "icon 3", "icon 4", "icon 5" and "icon 6". Visual design completion data A6 includes 4 visual elements; marking generates the visual element marks "icon 3", "icon 4", "icon 5" and "icon 6".
203. Classifying the visual design completion data set by combining a plurality of visual element marks and corresponding prior frame parameter sets to generate a multi-class visual element data set;
and the service constructs a data set of the visual design completion data set according to the plurality of visual element marks and the corresponding prior frame parameter sets, namely, the visual design completion data of the same type is divided into one type of data, so that a plurality of types of visual element data sets are generated.
Specifically, the server reads a plurality of position information parameters from a plurality of prior frame parameter sets respectively, then divides the visual design completion data by combining the visual element marks and the corresponding position information parameters, and divides the visual design completion data with the same visual element marks and the same position information parameters into a class of visual element data sets, so that a plurality of classes of visual element data sets are obtained.
The server first reads an abscissa parameter, an ordinate parameter, a width parameter and a length parameter from the prior frame parameter set, wherein the abscissa parameter and the ordinate parameter are the abscissa and ordinate of the center of the prior frame, and the width parameter and the length parameter are the width and length of the prior frame; the position information parameter of the visual element mark can then be determined based on the abscissa parameter, the ordinate parameter, the width parameter and the length parameter.
Continuing with the example of step 202 described above, assume that the position information parameters of visual design completion data A1 are B1, B2 and B3; those of A2 are B1, B2 and B3; those of A3 are B1, B2, B3 and B4; those of A4 are B1, B2, B3 and B4; those of A5 are B2, B3, B4 and B5; and those of A6 are B2, B3, B4 and B5. Combining this with the above example of visual element marks, the visual element marks and corresponding position information parameters of visual design completion data A1 and A2 are completely consistent, so A1 and A2 are determined as one class of visual element data set; the visual element marks and corresponding position information parameters of A3 and A4 are completely consistent, so A3 and A4 are determined as one class of visual element data set; and the visual element marks and corresponding position information parameters of A5 and A6 are completely consistent, so A5 and A6 are determined as one class of visual element data set; thereby obtaining a multi-class visual element data set.
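A minimal sketch of this classification step; the pairing of each visual element mark with a particular position information parameter (e.g. "icon 1" with B1) is illustrative only:

```python
from collections import defaultdict

def build_multiclass_datasets(completion_data):
    """Group completion data whose visual element marks and position parameters both match.

    `completion_data` maps a picture name to a list of (element_mark, position_parameter)
    pairs, mirroring the A1-A6 / B1-B5 example in the description."""
    classes = defaultdict(list)
    for name, annotations in completion_data.items():
        signature = tuple(sorted(annotations))   # same marks + same positions -> same class
        classes[signature].append(name)
    return list(classes.values())

data = {
    "A1": [("icon 1", "B1"), ("icon 2", "B2"), ("icon 3", "B3")],
    "A2": [("icon 1", "B1"), ("icon 2", "B2"), ("icon 3", "B3")],
    "A3": [("icon 2", "B1"), ("icon 3", "B2"), ("icon 4", "B3"), ("icon 5", "B4")],
    "A4": [("icon 2", "B1"), ("icon 3", "B2"), ("icon 4", "B3"), ("icon 5", "B4")],
    "A5": [("icon 3", "B2"), ("icon 4", "B3"), ("icon 5", "B4"), ("icon 6", "B5")],
    "A6": [("icon 3", "B2"), ("icon 4", "B3"), ("icon 5", "B4"), ("icon 6", "B5")],
}
print(build_multiclass_datasets(data))   # -> [['A1', 'A2'], ['A3', 'A4'], ['A5', 'A6']]
```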
204. Performing model training according to the multi-class visual element data set to generate a visual manuscript model;
and the server trains the model according to the multi-class visual element data set to generate a visual manuscript model.
The server divides each class of visual element data into a training element data set, a validation element data set and a test element data set; trains a deep learning model on the training element data set to generate an initial visual manuscript model; then inputs the validation element data set into the initial visual manuscript model for validation, thereby completing the parameter-tuning operation and generating the visual manuscript model; and finally evaluates the generalization ability of the visual manuscript model on the test element data set.
205. And acquiring an interactive design draft, and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and a visual manuscript model.
The server acquires an interactive design draft, wherein the interactive design draft is a design draft generated by preprocessing according to interactive requirements, and then at least two target visual manuscripts are generated by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and a visual manuscript model.
In this embodiment, the design specification configuration file may include identification information, wherein the identification information is added to the design specification configuration file in advance. When the design specification configuration file comprises the identity identification information, the server can refer to the historical visual manuscript matched with the identity identification information to generate at least two target visual manuscripts; and when the design specification configuration file does not comprise the identity identification information, the server generates at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
Specifically, the server acquires an interactive design draft, and judges whether a preset design specification configuration file includes identity identification information, wherein the interactive design draft is a design draft which is processed in advance according to interactive requirements; if the design specification configuration file comprises identity recognition information, the server reads a historical visual manuscript matched with the identity recognition information from the database; the server inputs the interactive design draft into a visual manuscript model, and generates at least two target visual manuscripts by referring to historical visual manuscripts; and if the design specification configuration file does not contain the identity identification information, the server generates at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
If the design specification configuration file is judged to comprise the identity identification information, the server reads the historical visual manuscripts matched with the identity identification information from the database, inputs the interactive design draft into the visual manuscript model and, with reference to the historical visual manuscripts, generates at least two target visual manuscripts. Historical visual manuscripts sharing the same identity identification information yield target visual manuscripts with a uniform style, so that the target visual manuscripts remain coherent.
If the design specification configuration file does not include the identity identification information, the server generates at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model, and the method specifically comprises the following steps:
the server standardizes at least two initial visual originals by adopting a preset design specification configuration file to generate at least two visual originals meeting the specification, and finally performs style configuration on the at least two visual originals meeting the specification by adopting a preset style configuration file to generate at least two target visual originals.
The target visual manuscript is an editable document in PSD format. The design specification configuration file defines a series of specifications, such as the title specification, the information specification and the picture specification, for the manuscripts generated this time. The specification configuration file in this embodiment is a digitized configuration file: for example, for the Song typeface (SimSun), assume that on the server the number "1" corresponds to the Song typeface; correspondingly, the font is defined as "font 1" in the specification configuration file. According to the specification configuration file, the server can automatically generate visual design manuscripts that meet the design specification; the specification configuration file is editable and can be adjusted according to an adjustment instruction. The style configuration file can contain the design style parameters specified by the designer and the materials that the designer requires to be inserted into the manuscript, and the server performs style configuration according to the style configuration file to generate at least two target visual manuscripts. For example, if the style configuration file includes a "two-dimensional (anime) cartoon style" parameter, style configuration is performed on the at least two specification-compliant visual manuscripts according to that parameter to generate at least two target visual manuscripts.
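For illustration, a sketch of resolving the digitized specification configuration file and applying the style configuration file; the font table entries beyond the "1" ↔ Song typeface correspondence and all field names are assumptions:

```python
# Hypothetical mapping; the embodiment only gives "1" -> Song typeface (SimSun) as an example.
FONT_TABLE = {"1": "SimSun", "2": "SimHei"}

def apply_spec_config(manuscript: dict, spec_config: dict) -> dict:
    """Resolve the digitized specification file (e.g. 'font 1') into concrete settings."""
    font_code = spec_config.get("font", "1")
    manuscript["font"] = FONT_TABLE.get(font_code, "SimSun")
    manuscript["title_size"] = spec_config.get("title_size", 24)
    return manuscript

def apply_style_config(manuscript: dict, style_config: dict) -> dict:
    """Apply designer-specified style parameters and insert any required materials."""
    manuscript["style"] = style_config.get("style", "default")
    manuscript["materials"] = list(style_config.get("materials", []))
    return manuscript

draft = {"layout": "from interactive design draft"}
draft = apply_spec_config(draft, {"font": "1", "title_size": 28})
draft = apply_style_config(draft, {"style": "anime cartoon", "materials": ["logo.png"]})
print(draft)   # the result would then be exported as an editable PSD target manuscript
```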
In the embodiment of the invention, the visual manuscript model is used, so that initial visual manuscripts meeting the requirements can be accurately generated based on the interactive design draft; the design specification configuration file and the style configuration file are then used to perform specification processing and style configuration on the initial visual manuscripts to generate the target visual manuscripts, thereby solving the problem that requirements cannot be systematically analyzed and improving the accuracy of generating visual manuscripts that meet the interaction requirements.
The method for generating a visual design manuscript according to the embodiment of the present invention has been described above; an apparatus for generating a visual design manuscript according to the embodiment of the present invention is described below with reference to fig. 3. One embodiment of the apparatus for generating a visual design manuscript according to the embodiment of the present invention includes:
an obtaining module 301, configured to obtain a visual design completion data set, perform element identification on the visual design completion data set, and generate a plurality of visual elements and a plurality of prior frame parameter sets, where the visual design completion data set is a picture data set;
a constructing module 302, configured to construct a data set according to the multiple visual elements and the corresponding prior frame parameter sets, so as to generate multiple types of visual element data sets;
the training module 303 is configured to perform model training according to the multi-class visual element data set to generate a visual manuscript model;
a generating module 304, configured to obtain an interactive design draft, and generate at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file, and the visual manuscript model.
In the embodiment of the invention, the visual manuscript model is used, so that initial visual manuscripts meeting the requirements can be accurately generated based on the interactive design draft; the design specification configuration file and the style configuration file are then used to perform specification processing and style configuration on the initial visual manuscripts to generate the target visual manuscripts, thereby solving the problem that requirements cannot be systematically analyzed and improving the accuracy of generating visual manuscripts that meet the interaction requirements.
Referring to fig. 4, another embodiment of the apparatus for generating a visual design manuscript according to the embodiment of the present invention includes:
an obtaining module 301, configured to obtain a visual design completion data set, perform element identification on the visual design completion data set, and generate a plurality of visual elements and a plurality of prior frame parameter sets, where the visual design completion data set is a picture data set;
a constructing module 302, configured to construct a data set according to the multiple visual elements and the corresponding prior frame parameter sets, so as to generate multiple types of visual element data sets;
the training module 303 is configured to perform model training according to the multi-class visual element data set to generate a visual manuscript model;
a generating module 304, configured to obtain an interactive design draft, and generate at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file, and the visual manuscript model.
Optionally, the obtaining module 301 includes:
an obtaining unit 3011, configured to obtain a visual design completion data set, where the visual design completion data set is a picture data set;
a feature extraction unit 3012, configured to perform feature extraction on each visual design completion data in the visual design completion data set by using an image recognition algorithm, so as to generate a visual design completion feature set;
a calculating unit 3013, configured to calculate the visual design completion feature set, and generate a plurality of prior frame sets of the visual design completion and a plurality of prior frame parameter sets, where the plurality of prior frame sets of the visual design completion correspond to the plurality of prior frame parameter sets one to one;
a first generating unit 3014, configured to generate a plurality of visual elements based on the plurality of prior frame sets of visual design completion and the plurality of prior frame parameter sets.
Optionally, the first generating unit 3014 may be further specifically configured to:
respectively reading a plurality of confidence coefficient parameter sets from the plurality of prior frame parameter sets, wherein the confidence coefficient parameter sets correspond to the prior frame sets one by one;
and screening the plurality of prior frame sets based on the plurality of confidence coefficient parameter sets by adopting a non-maximum suppression algorithm to generate a plurality of visual elements.
Optionally, the building module 302 includes:
a marking unit 3021, configured to mark the plurality of visual elements, and generate a plurality of visual element marks;
a classifying unit 3022, configured to classify the visual design completion data set by combining the multiple visual element labels and the corresponding prior frame parameter sets, so as to generate a multi-class visual element data set.
Optionally, the classification unit 3022 may be further specifically configured to:
reading a plurality of position information parameters from a plurality of prior frame parameter sets corresponding to the plurality of visual element markers, the plurality of position information parameters corresponding to the plurality of visual element markers one-to-one;
and dividing the visual design completion data with the same position information parameters and the same visual element marks into a class of visual element data sets to obtain a plurality of classes of visual element data sets.
Optionally, the generating module 304 includes:
a determining unit 3041, configured to obtain an interactive design draft, and determine whether a preset design specification configuration file includes identity identification information, where the interactive design draft is a design draft that is processed in advance according to an interactive requirement;
a reading unit 3042, configured to read, if the design specification configuration file includes identity identification information, a historical visual original matched with the identity identification information from a database;
a second generating unit 3043, configured to input the interactive design draft into the visual manuscript model, and generate at least two target visual manuscripts with reference to the historical visual manuscripts;
a third generating unit 3044, configured to, if the design specification configuration file does not include identity identification information, generate at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file, and the visual manuscript model.
Optionally, the third generating unit 3044 may be further specifically configured to:
inputting the interactive design draft into the visual manuscript model to generate at least two initial visual manuscripts;
performing standard processing on the at least two initial visual originals by combining a preset design specification configuration file to generate at least two visual originals meeting the specification;
and performing style configuration on the at least two visual manuscripts meeting the specification by combining a preset style configuration file to generate at least two target visual manuscripts.
In the embodiment of the invention, the visual manuscript model is used, so that initial visual manuscripts meeting the requirements can be accurately generated based on the interactive design draft; the design specification configuration file and the style configuration file are then used to perform specification processing and style configuration on the initial visual manuscripts to generate the target visual manuscripts, thereby solving the problem that requirements cannot be systematically analyzed and improving the accuracy of generating visual manuscripts that meet the interaction requirements.
Fig. 3 and fig. 4 above describe the apparatus for generating a visual design manuscript in the embodiment of the present invention in detail from the perspective of modular functional entities; the device for generating a visual design manuscript in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Fig. 5 is a schematic structural diagram of a device for generating a visual design original according to an embodiment of the present invention. The device 500 for generating a visual design original may differ considerably depending on its configuration or performance, and may include one or more processors (CPUs) 510, a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. The memory 520 and the storage medium 530 may be transient or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), and each module may include a series of instruction operations for the device 500 for generating a visual design original. Further, the processor 510 may be configured to communicate with the storage medium 530 and execute the series of instruction operations in the storage medium 530 on the device 500 for generating a visual design original.
The device 500 for generating a visual design original may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will appreciate that the structure shown in Fig. 5 does not constitute a limitation of the device for generating a visual design original, which may include more or fewer components than those shown, may combine some components, or may use a different arrangement of components.
The present invention further provides a computer device for generating a visual design original. The computer device comprises a memory and a processor, the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, cause the processor to perform the steps of the method for generating a visual design original in the above embodiments.
The present invention further provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium, having instructions stored therein that, when executed on a computer, cause the computer to perform the steps of the method for generating a visual design original.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
A blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, in which each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
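A minimal, purely illustrative sketch of how blocks can be chained by hashes, assuming a simplified block structure (no consensus mechanism or transaction validation is modeled):

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Create one block whose hash commits to a batch of transactions and to
    the previous block's hash, which is what makes tampering detectable."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# A two-block chain storing, for illustration, references to stored
# visual design completion data sets.
genesis = make_block(["store: design_set_v1"], previous_hash="0" * 64)
second = make_block(["store: design_set_v2"], previous_hash=genesis["hash"])
assert second["previous_hash"] == genesis["hash"]
```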
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for generating a visual design original, comprising:
acquiring a visual design completion data set, performing element identification on the visual design completion data set, and generating a plurality of visual elements and a plurality of prior frame parameter sets, wherein the visual design completion data set is a picture data set;
constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set;
performing model training according to the multi-class visual element data sets to generate a visual original model;
and acquiring an interactive design draft, and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file and the visual manuscript model.
2. The method of generating a visual design original according to claim 1, wherein the acquiring a visual design completion data set, performing element identification on the visual design completion data set, and generating a plurality of visual elements and a plurality of prior frame parameter sets, the visual design completion data set being a picture data set, comprises:
acquiring a visual design completion data set, wherein the visual design completion data set is a picture data set;
performing feature extraction on each visual design completion data in the visual design completion data set by adopting an image recognition algorithm to generate a visual design completion feature set;
calculating the visual design completion feature set to generate a plurality of visual design completion prior frame sets and a plurality of prior frame parameter sets, wherein the plurality of visual design completion prior frame sets correspond to the plurality of prior frame parameter sets one by one;
generating a plurality of visual elements based on the plurality of visual design completion prior frame sets and the plurality of prior frame parameter sets.
3. The method of generating a visual design original according to claim 2, wherein the generating a plurality of visual elements based on the plurality of visual design completion prior frame sets and the plurality of prior frame parameter sets comprises:
respectively reading a plurality of confidence parameter sets from the plurality of prior frame parameter sets, wherein the plurality of confidence parameter sets correspond to the plurality of prior frame sets one-to-one;
and screening the plurality of prior frame sets based on the plurality of confidence parameter sets by adopting a non-maximum suppression algorithm to generate a plurality of visual elements.
4. The method of claim 1, wherein the constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter set, and generating a multi-class visual element data set comprises:
labeling the plurality of visual elements to generate a plurality of visual element labels;
and classifying the visual design completion data set by combining the plurality of visual element labels and the corresponding prior frame parameter sets to generate a multi-class visual element data set.
5. The method of claim 4, wherein the classifying the visual design completion data set by combining the plurality of visual element labels and the corresponding prior frame parameter sets to generate a multi-class visual element data set comprises:
reading a plurality of position information parameters from the plurality of prior frame parameter sets corresponding to the plurality of visual element labels, the plurality of position information parameters corresponding to the plurality of visual element labels one-to-one;
and grouping the visual design completion data that have the same position information parameters and the same visual element label into one class of visual element data set, so as to obtain the multi-class visual element data sets.
6. The method of claim 1, wherein the acquiring an interactive design draft, and generating at least two target visual manuscripts by combining the interactive design draft, a preset design specification configuration file, a preset style configuration file, and the visual manuscript model comprises:
acquiring an interactive design draft, and judging whether a preset design specification configuration file comprises identity identification information or not, wherein the interactive design draft is a design draft which is processed in advance according to interactive requirements;
if the design specification configuration file comprises identity recognition information, reading a historical visual manuscript matched with the identity recognition information from a database;
inputting the interactive design draft into the visual manuscript model, and generating at least two target visual manuscripts by referring to the historical visual manuscript;
and if the design specification configuration file does not contain identity identification information, generating at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
7. The method of claim 6, wherein, if the design specification configuration file does not include identity identification information, the generating at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file, and the visual manuscript model comprises:
inputting the interactive design draft into the visual manuscript model to generate at least two initial visual manuscripts;
performing specification processing on the at least two initial visual manuscripts by combining the preset design specification configuration file to generate at least two visual manuscripts meeting the specification;
and performing style configuration on the at least two visual manuscripts meeting the specification by combining the preset style configuration file to generate at least two target visual manuscripts.
8. An apparatus for generating a visual design original, comprising:
the acquisition module is used for acquiring a visual design completion data set, performing element identification on the visual design completion data set and generating a plurality of visual elements and a plurality of prior frame parameter sets, wherein the visual design completion data set is a picture data set;
the construction module is used for constructing a data set according to the plurality of visual elements and the corresponding prior frame parameter sets to generate a multi-class visual element data set;
the training module is used for carrying out model training according to the multi-class visual element data sets to generate a visual original model;
and the generating module is used for acquiring the interactive design draft and generating at least two target visual manuscripts by combining the interactive design draft, the preset design specification configuration file, the preset style configuration file and the visual manuscript model.
9. A generation apparatus of a visual design original, characterized by comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the generation apparatus of the visual design original to execute the generation method of the visual design original according to any one of claims 1 to 7.
10. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement a method of generating a visual design original according to any one of claims 1 to 7.
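A minimal sketch of the confidence-based non-maximum suppression screening recited in claims 2 and 3, assuming hypothetical (x1, y1, x2, y2) prior frame coordinates and per-frame confidence values:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two prior frames."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(frames: List[Box], confidences: List[float],
        iou_threshold: float = 0.5) -> List[int]:
    """Keep the highest-confidence prior frame and drop overlapping ones."""
    order = sorted(range(len(frames)), key=lambda i: confidences[i], reverse=True)
    kept: List[int] = []
    for i in order:
        if all(iou(frames[i], frames[j]) <= iou_threshold for j in kept):
            kept.append(i)
    return kept

# Two overlapping candidate frames around the same button plus one banner frame:
frames = [(10, 10, 90, 50), (12, 12, 92, 52), (0, 100, 375, 220)]
confidences = [0.92, 0.85, 0.77]
print(nms(frames, confidences))  # -> [0, 2]
```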
CN202011578001.6A 2020-12-28 2020-12-28 Method, device and equipment for generating visual design manuscript and storage medium Pending CN112529106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011578001.6A CN112529106A (en) 2020-12-28 2020-12-28 Method, device and equipment for generating visual design manuscript and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011578001.6A CN112529106A (en) 2020-12-28 2020-12-28 Method, device and equipment for generating visual design manuscript and storage medium

Publications (1)

Publication Number Publication Date
CN112529106A true CN112529106A (en) 2021-03-19

Family

ID=74976789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011578001.6A Pending CN112529106A (en) 2020-12-28 2020-12-28 Method, device and equipment for generating visual design manuscript and storage medium

Country Status (1)

Country Link
CN (1) CN112529106A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867526A (en) * 2007-02-14 2013-01-09 缪斯亚米有限公司 Collaborative music creation
CN109710523A (en) * 2018-12-18 2019-05-03 平安科技(深圳)有限公司 Method for generating test case and device, storage medium, the electronic equipment of vision original text
CN109784196A (en) * 2018-12-20 2019-05-21 哈尔滨工业大学深圳研究生院 Visual information, which is sentenced, knows method, apparatus, equipment and storage medium
CN110069300A (en) * 2019-03-13 2019-07-30 深圳壹账通智能科技有限公司 Vision original text generation method, device, medium and electronic equipment
CN110751232A (en) * 2019-11-04 2020-02-04 哈尔滨理工大学 Chinese complex scene text detection and identification method
CN110795666A (en) * 2019-10-18 2020-02-14 腾讯科技(深圳)有限公司 Webpage generation method, device, terminal and storage medium
CN111709467A (en) * 2020-06-04 2020-09-25 哈尔滨工业大学 Product and cloud data matching system and method based on machine vision
CN111784642A (en) * 2020-06-10 2020-10-16 中铁四局集团有限公司 Image processing method, target recognition model training method and target recognition method
CN112115873A (en) * 2020-09-21 2020-12-22 南京市公安局水上分局 Diatom automatic detection method and system based on deep learning


Similar Documents

Publication Publication Date Title
CN110796154B (en) Method, device and equipment for training object detection model
US10573044B2 (en) Saliency-based collage generation using digital images
EP2808828A2 (en) Image matching method, image matching device, model template generation method, model template generation device, and program
CN111985323B (en) Face recognition method and system based on deep convolutional neural network
JP2019102061A (en) Text line segmentation method
JP2019102061A5 (en)
CN110909868A (en) Node representation method and device based on graph neural network model
JP2017102865A (en) Information processing device, information processing method and program
CN115797962B (en) Wall column identification method and device based on assembly type building AI design
US7668336B2 (en) Extracting embedded information from a document
CN112085078A (en) Image classification model generation system, method and device and computer equipment
CN108509401B (en) Contract generation method and device, computer equipment and storage medium
CN110827301B (en) Method and apparatus for processing image
CN113963353A (en) Character image processing and identifying method and device, computer equipment and storage medium
CN111144215A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113705650B (en) Face picture set processing method, device, medium and computing equipment
JP2020160543A (en) Information processing system and information processing method
CN111898408A (en) Rapid face recognition method and device
CN112529106A (en) Method, device and equipment for generating visual design manuscript and storage medium
US20220406082A1 (en) Image processing apparatus, image processing method, and storage medium
CN116311297A (en) Electronic evidence image recognition and analysis method based on computer vision
CN116310568A (en) Image anomaly identification method, device, computer readable storage medium and equipment
Nasiri et al. A new binarization method for high accuracy handwritten digit recognition of slabs in steel companies
CN111368674B (en) Image recognition method and device
JP7396505B2 (en) Model generation program, model generation method, and model generation device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination