CN109583509B - Data generation method and device and electronic equipment - Google Patents

Data generation method and device and electronic equipment

Info

Publication number
CN109583509B
CN109583509B (application CN201811523178.9A)
Authority
CN
China
Prior art keywords
background image
graphs
graph
example graph
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811523178.9A
Other languages
Chinese (zh)
Other versions
CN109583509A (en)
Inventor
魏秀参
宾言锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Kuangyun Technology Co ltd
Beijing Kuangshi Technology Co Ltd
Original Assignee
Nanjing Kuangyun Technology Co ltd
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Kuangyun Technology Co ltd, Beijing Kuangshi Technology Co Ltd filed Critical Nanjing Kuangyun Technology Co ltd
Priority to CN201811523178.9A
Publication of CN109583509A
Application granted
Publication of CN109583509B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Abstract

The invention provides a data generation method, a data generation device and electronic equipment, belonging to the technical field of image processing. The data generation method comprises the following steps: acquiring a background image and acquiring a plurality of example graphs of a target object; adding the example graphs to the background image and determining their positions such that the distance between any two of the example graphs is within a preset range; and synthesizing the example graphs and the background image into a training image according to the determined positions. By adding example graphs of the target object to a background image and synthesizing training images, the data generation method, device and electronic equipment increase the data volume of the training data set, which improves the diversity of the training data, improves the training effect of the model, and enhances the stability of the trained model.

Description

Data generation method and device and electronic equipment
Technical Field
The invention belongs to the technical field of image processing, and in particular relates to a data generation method, a data generation device and electronic equipment.
Background
In recent years, deep learning techniques have developed rapidly and have been widely applied in fields such as object detection and behavior recognition. Current deep learning techniques are mostly implemented with convolutional neural networks. Training a convolutional neural network places high demands on the quantity and diversity of the training data, and the final performance of the network is directly proportional to the richness of the training data.
At present, the amount of training data is increased by applying random scale changes, random rotations or random horizontal flips to the images. A convolutional neural network trained on data obtained in this way tends to output wrong detection results when detecting images with overlapping objects or cluttered backgrounds. This approach therefore cannot guarantee the stability of the convolutional neural network when processing images with complex backgrounds.
Disclosure of Invention
In view of this, the present invention provides a data generation method, an apparatus and an electronic device, which can increase the amount of data used for training, improve the diversity of the data, and help improve the stability of the trained model.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a data generation method, including:
acquiring a background image and acquiring a plurality of example graphs of a target object;
adding the multiple example graphs into the background image, determining the positions of the multiple example graphs, and enabling the distance between any two example graphs in the multiple example graphs to be within a preset range;
and synthesizing the plurality of example graphs and the background image into a training image according to the determined position.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of acquiring a background image includes:
extracting a first image not containing a target object from a pre-acquired training data set as a background image;
the method comprises the following steps of obtaining a plurality of example graphs of a target object, wherein the steps comprise:
extracting at least one second image containing a target object from the training data set;
segmenting a plurality of example graphs of the target object from the at least one second image.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of acquiring a background image and acquiring multiple example diagrams of a target object includes:
extracting a first image not containing a target object from a pre-acquired training data set as a background image to generate a candidate background set;
extracting a second image containing a target object from the training data set, segmenting an example graph of the target object from the second image, and generating a candidate example set;
selecting a background image from the candidate background set, selecting a plurality of example graphs from the candidate example set, and correspondingly forming example background pairs by the selected background image and the example graphs; the example graph and the background image in the example background pair are used to synthesize a training image.
With reference to the second possible implementation manner of the first aspect, the embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of correspondingly forming an example background pair by using the selected background image and the multiple example graphs includes:
scaling the instance graph segmented from the second image;
and correspondingly forming an example background pair by the selected background image, the example graph segmented from the second image and the scaled example graph.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of adding the multiple example graphs to the background image and determining the positions of the multiple example graphs includes:
randomly adding the multiple example graphs to any position of the background image;
determining a proximity loss value from the location of each example graph of the multiple example graphs on the background image;
adjusting the positions of the example graphs on the background image according to the proximity loss value.
With reference to the fourth possible implementation manner of the first aspect, the embodiment of the present invention provides a fifth possible implementation manner of the first aspect, wherein the step of determining the proximity loss value according to the position of each of the multiple example graphs on the background image includes:
respectively determining an overall tension loss value and an overall thrust loss value according to the position of each example graph on the background image;
and determining the proximity loss value according to the overall tension loss value and the overall thrust loss value.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, wherein the step of determining an overall tension loss value according to a position of each example graph on the background image includes:
taking each example graph on the background image as a current example graph one by one, and calculating the distance between the current example graph and each other example graph on the background image; determining an adjacent example graph closest to the current example graph, and combining the current example graph and the adjacent example graph into a group of similar example graph pairs;
all the obtained similar example graph pairs form an example graph pair set;
determining a tension loss value of each group of similar example graph pairs in the example graph pair set;
and taking the sum of the tension loss values of each group of similar example graph pairs as the overall tension loss value.
With reference to the sixth possible implementation manner of the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, wherein the step of determining the overall thrust loss value according to the position of each example graph on the background image includes:
combining any two example graphs on the background image into a group of example graph pairs to obtain a plurality of groups of example graph pairs;
selecting non-similar example graph pairs outside the example graph pair set from the multiple groups of example graph pairs, and calculating the thrust loss value of each group of non-similar example graph pairs;
and taking the sum of the thrust loss values of each group of non-similar example graph pairs as the overall thrust loss value.
With reference to the fifth possible implementation manner of the first aspect, the embodiment of the present invention provides an eighth possible implementation manner of the first aspect, wherein the step of determining the proximity loss value according to the overall tension loss value and the overall thrust loss value includes:
multiplying the overall tension loss value by a preset coefficient, and adding the overall thrust loss value to obtain the proximity loss value.
With reference to the first aspect, an embodiment of the present invention provides a ninth possible implementation manner of the first aspect, where after the step of adding the multiple example graphs to the background image and determining the positions of the multiple example graphs, the method further includes:
determining an occluded area ratio for each example graph on the background image, the occluded area ratio being the ratio of the area of the occluded part of the example graph to the total area of the example graph;
and deleting any example graph whose occluded area ratio is larger than a set threshold value.
With reference to the first aspect, an embodiment of the present invention provides a tenth possible implementation manner of the first aspect, where after the step of adding the multiple example graphs to the background image and determining the positions of the multiple example graphs, the method further includes:
setting the label of an occluded portion of an example graph on the background image as occluded.
In a second aspect, an embodiment of the present invention further provides a data generating apparatus, including:
the element acquisition module is used for acquiring a background image and acquiring a plurality of example graphs of a target object;
the position determining module is used for adding the plurality of example graphs into the background image, determining the positions of the plurality of example graphs and enabling the distance between any two example graphs in the plurality of example graphs to be within a preset range;
and the data generation module is used for synthesizing the multiple example graphs and the background image into a training image according to the determined position.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the element obtaining module is further configured to:
extracting a first image not containing a target object from a pre-acquired training data set as a background image;
extracting at least one second image containing a target object from the training data set;
segmenting a plurality of example graphs of the target object from the at least one second image.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, where the element obtaining module is further configured to:
extracting a first image not containing a target object from a pre-acquired training data set as a background image to generate a candidate background set;
extracting a second image containing a target object from the training data set, segmenting an example graph of the target object from the second image, and generating a candidate example set;
selecting a background image from the candidate background set, selecting a plurality of example graphs from the candidate example set, and correspondingly forming example background pairs by the selected background image and the example graphs; the example graph and the background image in the example background pair are used to synthesize a training image.
With reference to the second possible implementation manner of the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, where the element obtaining module is further configured to: scaling the instance graph segmented from the second image; and correspondingly forming an example background pair by the selected background image, the example graph segmented from the second image and the scaled example graph.
With reference to the second aspect, an embodiment of the present invention provides a fourth possible implementation manner of the second aspect, where the position determining module is further configured to:
randomly adding the multiple example graphs to any position of the background image;
determining a proximity loss value from the location of each example graph of the multiple example graphs on the background image;
adjusting the positions of the example graphs on the background image according to the proximity loss value.
With reference to the fourth possible implementation manner of the second aspect, an embodiment of the present invention provides a fifth possible implementation manner of the second aspect, wherein the position determining module is further configured to:
respectively determining an overall tension loss value and an overall thrust loss value according to the position of each example graph on the background image;
and determining the proximity loss value according to the overall tension loss value and the overall thrust loss value.
With reference to the fifth possible implementation manner of the second aspect, the embodiment of the present invention provides a sixth possible implementation manner of the second aspect, wherein the position determining module is further configured to:
taking each example graph on the background image as a current example graph one by one, and calculating the distance between the current example graph and each other example graph on the background image;
taking the example graph closest to the current example graph as an adjacent example graph of the current example graph, and combining the current example graph and the adjacent example graph into a group of similar example graph pairs;
all the obtained similar example graph pairs form an example graph pair set;
determining a tension loss value of each group of similar example graph pairs in the example graph pair set;
and taking the sum of the tension loss values of each group of similar example graph pairs as the overall tension loss value.
With reference to the fifth possible implementation manner of the second aspect, the embodiment of the present invention provides a seventh possible implementation manner of the second aspect, wherein the position determining module is further configured to:
combining any two example graphs on the background image into a group of example graph pairs to obtain a plurality of groups of example graph pairs;
selecting non-similar example graph pairs outside the example graph pair set from the multiple groups of example graph pairs, and calculating the thrust loss value of each group of non-similar example graph pairs;
and taking the sum of the thrust loss values of each group of non-similar example graph pairs as the overall thrust loss value.
With reference to the fifth possible implementation manner of the second aspect, the embodiment of the present invention provides an eighth possible implementation manner of the second aspect, wherein the position determining module is further configured to:
and multiplying the overall tension loss value by a preset coefficient, and adding the overall thrust loss value to obtain the proximity loss value.
With reference to the fourth possible implementation manner of the second aspect, the embodiment of the present invention provides a tenth possible implementation manner of the second aspect, where the position determining module is further configured to:
determining an occluded area ratio for each example graph on the background image, the occluded area ratio being the ratio of the area of the occluded part of the example graph to the total area of the example graph;
and deleting any example graph whose occluded area ratio is larger than a set threshold value.
With reference to the fourth possible implementation manner of the second aspect, the embodiment of the present invention provides an eleventh possible implementation manner of the second aspect, wherein the position determining module is further configured to:
setting the label of an occluded portion of an example graph on the background image as occluded.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor;
the memory has stored therein a computer program operable on the processor, which when executed implements the steps of the method of any of the first aspects described above.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above first aspects.
The embodiment of the invention provides a data generation method, a data generation device and electronic equipment, which increase the data volume of the training data set by adding example graphs of a target object to a background image and synthesizing training images, thereby improving the diversity of the training data.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method of data generation provided by an embodiment of the present invention;
FIG. 3 is a flow chart illustrating another data generation method provided by embodiments of the present invention;
fig. 4 shows a block diagram of a data generating apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a training image obtained by the data generation method according to the embodiment of the present invention;
fig. 6 is a schematic diagram illustrating another training image obtained by the data generation method according to the embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem that a model obtained by training with an existing training data set is insufficient in stability when processing a complex background image, embodiments of the present invention provide a data generation method, an apparatus, an electronic device, and a computer storage medium. The following describes in detail a data generation method, an apparatus, an electronic device, and a computer storage medium according to embodiments of the present invention with reference to specific embodiments and drawings.
The first embodiment is as follows:
first, an example electronic device 100 for implementing the data generation method of the embodiment of the present invention is described with reference to fig. 1. The exemplary electronic device 100 may be a computer or a server, or may be other electronic devices, and the invention is not limited in particular.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memories 104, input devices 106, output devices 108, and communication devices 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other form of processing unit having data processing capabilities, image processing capabilities, and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the data generation functionality of the embodiments of the invention described below and/or other desired functionality. Various applications and various data, such as the images used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The communication device 110 may include a data transmission interface or a network interface for connecting with other electronic devices and other network units. For example, the electronic device 100 may connect to a remote server via the communication device 110, and download the training data set from the remote server.
Optionally, the electronic device 100 may further comprise an image acquisition apparatus. The image capture device may capture images (e.g., photographs, videos, etc.) desired by the user and store the captured images in the storage device for use by other components.
Example two:
the embodiment provides a data generation method which can increase the data volume of training data and the diversity of the training data. Fig. 2 shows a flow chart of the data generation method. It should be noted that the steps illustrated in the flowchart of fig. 2 may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than presented herein. The present embodiment will be described in detail below.
As shown in fig. 2, the data generating method provided in this embodiment includes the following steps:
step S202, acquiring a background image and acquiring a plurality of example graphs of the target object.
The background image is a scene image that does not include the target object. The background image can be acquired by the electronic equipment through an image acquisition device, collected from a network or downloaded from a remote server.
The target object varies with the intended use of the model to be trained; it may be, but is not limited to, a pedestrian, a vehicle, an animal, a plant or another object of interest, and may also be a part of an animal or a plant. For example, if the model to be trained is used for human pose estimation, the target object may be a human body, such as human bodies in different poses. If the model to be trained is used for vehicle detection, the target object may be a vehicle.
An example graph of the target object can be obtained by segmenting it from an image containing the target object using an image segmentation method; it can be understood as the target object region in that image. For example, the example graph can be segmented from the image using a mask network. The image containing the target object may be acquired by an image acquisition device, collected from a network, or downloaded from a remote server.
In an alternative embodiment, a public training data set may be downloaded from a remote server, and a first image not containing the target object and a second image containing the target object may be extracted from the pre-acquired training data set, with the first image serving as the background image. The multiple example graphs of the target object are segmented from the second image, for example using an existing image segmentation method (such as a mask network). If the second image contains multiple target objects, multiple example graphs of the target object can be segmented from it. If the second image contains only one target object region, one example graph can be segmented from it; in this case, multiple second images can be extracted from the training data set and an example graph of the target object segmented from each, so that multiple example graphs of the target object are obtained.
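As a rough illustration of this step, the following Python sketch splits a data set into background candidates and example graphs. It assumes the annotations are already decoded into per-object binary masks (rather than produced by a mask network), and `split_dataset`, `samples` and `target_label` are illustrative names, not taken from the patent.

```python
import numpy as np

def split_dataset(samples, target_label):
    """Split (image, masks, labels) samples into background images and example graphs.

    Images without the target object become background candidates (first images);
    each target mask in the remaining images (second images) yields one example graph.
    """
    backgrounds, instances = [], []
    for image, masks, labels in samples:
        target_masks = [m for m, l in zip(masks, labels) if l == target_label]
        if not target_masks:
            backgrounds.append(image)
            continue
        for mask in target_masks:
            ys, xs = np.where(mask)
            crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].copy()
            crop_mask = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            crop[~crop_mask] = 0  # keep only the pixels of the target object region
            instances.append((crop, crop_mask))
    return backgrounds, instances
```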
The obtained example graphs can be subjected to scale transformation to obtain example graphs of different scales; adding these to the background image further increases the diversity of the training data.
In another alternative embodiment, an open training data set may be downloaded from a remote server, and a first image not including a target object is extracted from a pre-acquired training data set as a background image to generate a candidate background set; extracting a second image containing the target object from the training data set, segmenting an example graph of the target object from the second image, and generating a candidate example set; selecting a background image from the candidate background set, selecting a plurality of example graphs from the candidate example set, and correspondingly forming example background pairs by the selected background image and the example graphs; the example graph and the background image in the example background pair are used to synthesize a training image.
Step S204, adding the multiple example graphs into the background image, and determining positions of the multiple example graphs so that a distance between any two example graphs in the multiple example graphs is within a preset range.
In order to make the distances between the example graphs appropriate, so that different example graphs neither overlap excessively nor lie too far apart, the positions of the example graphs on the background image can be set as follows. All the example graphs are randomly added to arbitrary positions on the background image, with the example graphs lying above the background image. A proximity loss value is then determined from the position of each example graph on the background image, and the positions of the example graphs are adjusted according to the proximity loss value. The proximity loss value comprises a thrust loss value and a tension loss value: the thrust loss value is used to keep the example graphs from overlapping one another excessively, and the tension loss value is used to keep the degree of overlap between adjacent example graphs within a given range, so that they do not drift too far apart. Through the thrust loss value and the tension loss value, the distance between any two of the example graphs can be controlled within the preset range; that is, no two example graphs are too far apart, and none overlap to too great a degree.
And step S206, synthesizing the multiple example graphs and the background image into a training image according to the determined position.
After the positions of the example graphs on the background image have been determined, the pixels of the example graphs can be directly overlaid on the background image, synthesizing the example graphs and the background image into a training image.
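A minimal compositing sketch in the same vein, assuming every crop fits inside the canvas at its position; `synthesize` and `placements` are illustrative names.

```python
def synthesize(background, placements):
    """Overlay example graphs onto the background to form one training image.

    `placements` is a list of (crop, crop_mask, top, left) tuples.
    """
    canvas = background.copy()
    for crop, crop_mask, top, left in placements:
        h, w = crop_mask.shape
        region = canvas[top:top + h, left:left + w]
        region[crop_mask] = crop[crop_mask]  # instance pixels directly cover the background
    return canvas
```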
In one application scenario, the generated training images may be added to the training data set used to train a model. With this method, a large number of training images containing the target object can be generated; adding them to the training data set can remedy the situation where the original training data set contains a large proportion of negative samples, i.e., training images that do not contain the target object.
According to the data generation method provided by the embodiment of the invention, the data volume of the training data set is increased by adding the example graph of the target object into the background image and synthesizing the training image, so that the diversity of the training data can be improved, the training effect of the model can be improved, and the stability of the model obtained by training can be enhanced. The embodiment of the invention does not limit the specific application scenarios, and the model trained by the training data set obtained by the embodiment of the invention can be applied to various different practical application scenarios.
Example three:
on the basis of the above method embodiment, another data generation method is further provided in the embodiment of the present invention, as shown in fig. 3, the method includes the following steps:
step S302 is to extract a first image not including the target object from a training data set acquired in advance as a background image, and generate a candidate background set.
The training data set may be downloaded from a network or a remote server; for example, the MS COCO data set may be downloaded from Microsoft's official website and used as the training data set. Some or all of the images in the training data set that do not contain the target object are extracted to generate the candidate background set.
Step S304, a second image containing the target object is extracted from the training data set, an example graph of the target object is segmented from the second image, and a candidate example set is generated.
Some or all of the images in the training data set that contain the target object are extracted as second images. An image segmentation method is used to segment an example graph of the target object from each second image, and a candidate example set is generated from all the example graphs obtained by segmentation.
Step S306, selecting a background image from the candidate background set, selecting a plurality of example graphs from the candidate example set, and correspondingly forming an example background pair by the selected background image and the example graphs.
The example graph and the background image in an example background pair are used to synthesize a training image. In order to obtain example graphs of various scales, the example graph segmented from the second image can be subjected to scale transformation, and the selected background image, the example graph segmented from the second image and the scaled example graph are correspondingly combined into an example background pair. For example, the example graph may be scaled, using a Fourier transform or a variable Gaussian function, so that each example graph has 400 pixels per joint point on average.
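The sketch below illustrates only the scale transformation step with a plain bilinear resize; it does not reproduce the Fourier/Gaussian transform mentioned above, and `rescale_instance` is an illustrative helper.

```python
import cv2
import numpy as np

def rescale_instance(crop, crop_mask, scale):
    """Return a scaled copy of an example graph and its mask."""
    h, w = crop_mask.shape
    size = (max(1, int(w * scale)), max(1, int(h * scale)))  # cv2 expects (width, height)
    crop_s = cv2.resize(crop, size, interpolation=cv2.INTER_LINEAR)
    mask_s = cv2.resize(crop_mask.astype(np.uint8), size,
                        interpolation=cv2.INTER_NEAREST).astype(bool)
    return crop_s, mask_s
```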
By repeatedly executing step S306, multiple example background pairs can be obtained. For each example background pair, a training image can be obtained according to steps S308 to S314 below.
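Before walking through those steps, a sketch of how step S306 might form an example background pair by sampling the two candidate sets (all names illustrative, and `k` is assumed not to exceed the size of the candidate example set):

```python
import random

def sample_example_background_pair(candidate_backgrounds, candidate_instances, k=4):
    """Form one example background pair (Insts, Bg)."""
    bg = random.choice(candidate_backgrounds)
    insts = random.sample(candidate_instances, k)  # k example graphs, e.g. K = 4
    return insts, bg
```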
In step S308, the example graphs in the example background pair are randomly added to arbitrary positions on the background image of that example background pair.
For example, (Insts, B_g) represents an example background pair, where B_g represents the background image and Insts represents the set of example graphs in the example background pair. Insts may contain K example graphs, denoted Insts = {p_1, p_2, ..., p_K}, where p_K is the K-th example graph.
In step S310, a proximity loss value is determined according to the position of each example graph on the background image.
The proximity loss value (proximityLoss) consists of two parts. The first part is the overall thrust loss value (pushLoss), which keeps the example graphs from overlapping one another excessively. The second part is the overall tension loss value (pullLoss), which controls the degree of overlap of the two most adjacent example graphs within a given range [T_1, T_2]. The proximity loss value may be determined as follows: respectively determine an overall tension loss value and an overall thrust loss value according to the position of each example graph on the background image, and then determine the proximity loss value from the two. The steps for determining the overall tension loss value and the overall thrust loss value are described below.
Determining the overall tension loss value according to the position of each example graph on the background image can comprise the following steps:
(1) taking each example graph on the background image as a current example graph one by one, and calculating the distance between the current example graph and each example graph on the background image; and taking the example graph closest to the current example graph as an adjacent example graph of the current example graph, and combining the current example graph and the adjacent example graph into a group of similar example graph pairs.
The distance between the two example graphs on the background image can be calculated as follows. For example, for the ith example graph piThe position can be expressed as:
Figure BDA0001902394850000141
wherein the content of the first and second substances,
Figure BDA0001902394850000142
as the coordinate of the upper left corner of the ith example graph,
Figure BDA0001902394850000143
is the coordinate of the lower right corner of the ith example graph.
Figure BDA0001902394850000144
Example diagram piThe area of (a).
Define the matrix RD ∈ R^{K×K×2}, where

RD_{i,j} = (min(x_i^{rd}, x_j^{rd}), min(y_i^{rd}, y_j^{rd}))

RD_{i,j} is determined by the lower-right corners of example graphs p_i and p_j and has a lateral component and a longitudinal component. Define the matrix LU ∈ R^{K×K×2}, where

LU_{i,j} = (max(x_i^{lu}, x_j^{lu}), max(y_i^{lu}, y_j^{lu}))

LU_{i,j} is determined by the upper-left corners of example graphs p_i and p_j and likewise has a lateral component and a longitudinal component. Let DIFF = RD - LU; DIFF_{i,j} then reflects the positional relationship between example graphs p_i and p_j.

If DIFF_{i,j,1} > 0, example graphs p_i and p_j intersect along the horizontal axis; if DIFF_{i,j,2} > 0, they intersect along the vertical axis, and vice versa. If DIFF_{i,j,1} < 0, example graphs p_i and p_j are disjoint along the horizontal axis, and the smaller DIFF_{i,j,1} is, the further apart they are laterally. If DIFF_{i,j,2} < 0, they are disjoint along the vertical axis, and the smaller DIFF_{i,j,2} is, the further apart they are longitudinally. Define the distance matrix Distance ∈ R^{K×K}:

Distance = -DIFF_{:,:,1} - DIFF_{:,:,2} - 10^2 × inter + 10^3 × E

where inter_{i,j} denotes the area of overlap between example graphs p_i and p_j, and E is a preset K×K identity matrix. The term -10^2 × inter ensures that the distance between intersecting example graphs is smaller than the distance between non-intersecting example graphs, and the term 10^3 × E ensures that the distance between an example graph and itself is far larger than its distance to any other example graph.
For any example graph p_i, take it as the current example graph and find the example graph p_ǐ closest to it, p_ǐ = argmin_j Distance_{i,j}, where Distance_i ∈ R^K denotes the distances between example graph p_i and all example graphs in the example graph set Insts (i.e., each example graph on the background image). Example graph p_ǐ serves as the adjacent example graph of example graph p_i, and p_i and p_ǐ form a group of similar example graph pairs.
(2) All the obtained similar example graph pairs form an example graph pair set.
Following the method of step (1), K groups of similar example graph pairs are finally obtained, forming the example graph pair set PInsts = {(p_i, p_ǐ)}, i = 1, ..., K.
(3) Determine the tension loss value of each group of similar example graph pairs in the example graph pair set.
For each group of similar example graph pairs in the example graph pair set PInsts, a tension loss value pullLoss_{i,ǐ} is determined; it penalizes the pair whenever the degree of overlap between p_i and p_ǐ falls outside the given range [T_1, T_2].
(4) The sum of the tension loss values of all groups of similar example graph pairs is taken as the overall tension loss value.
The overall tension loss value pullLoss can be expressed as:

pullLoss = Σ_{(p_i, p_ǐ) ∈ PInsts} pullLoss_{i,ǐ}
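The sketch below implements the Distance matrix and the similar example graph pair set as reconstructed above. Since the patent's exact per-pair pullLoss formula appears only in a figure that is not reproduced here, the tension term is an assumed hinge penalty that keeps the overlap ratio of each similar pair inside [T_1, T_2]; treat it as a stand-in, not the patent's formula.

```python
import numpy as np

def distance_matrix(boxes):
    """Distance matrix over K example graphs; boxes is (K, 4) as (x_lu, y_lu, x_rd, y_rd).

    Returns the Distance matrix and the pairwise overlap areas inter.
    """
    LU = np.maximum(boxes[:, None, :2], boxes[None, :, :2])  # shared upper-left corners
    RD = np.minimum(boxes[:, None, 2:], boxes[None, :, 2:])  # shared lower-right corners
    DIFF = RD - LU  # positive on an axis iff the projections intersect there
    inter = np.clip(DIFF[..., 0], 0, None) * np.clip(DIFF[..., 1], 0, None)
    E = np.eye(len(boxes))
    return -DIFF[..., 0] - DIFF[..., 1] - 1e2 * inter + 1e3 * E, inter

def overall_pull_loss(boxes, t1=0.05, t2=0.2):
    """Sum assumed hinge tension losses over the similar example graph pairs PInsts."""
    distance, inter = distance_matrix(boxes)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    pinsts = [(i, int(j)) for i, j in enumerate(distance.argmin(axis=1))]
    loss = 0.0
    for i, j in pinsts:
        ratio = inter[i, j] / min(areas[i], areas[j])  # overlap degree of the pair
        loss += max(t1 - ratio, 0.0) + max(ratio - t2, 0.0)
    return loss, pinsts
```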
determining the overall thrust loss value according to the position of each example graph on the background image may comprise the following steps:
(a) any two example graphs on the background image are combined into a group of example graph pairs to obtain a plurality of groups of example graph pairs.
It can also be understood that any two example graphs in the example graph set Insts are combined into a group of example graph pairs, so that a total of K(K-1)/2 groups of example graph pairs can be obtained.
(b) Select the non-similar example graph pairs, i.e., those outside the example graph pair set, from the multiple groups of example graph pairs, and calculate the thrust loss value of each group of non-similar example graph pairs.

From the multiple groups of example graph pairs obtained in step (a), select all the example graph pairs that do not belong to the example graph pair set PInsts as non-similar example graph pairs, and calculate the thrust loss value of each such pair, where the thrust loss value of a group of non-similar example graph pairs is pushLoss_{i,j} = inter_{i,j}.
(c) The sum of the thrust loss values of all groups of non-similar example graph pairs is taken as the overall thrust loss value.
The overall thrust loss value pushLoss can be expressed as:

pushLoss = Σ_{(p_i, p_j) ∉ PInsts} pushLoss_{i,j} = Σ_{(p_i, p_j) ∉ PInsts} inter_{i,j}
The proximity loss value proximityLoss is obtained by multiplying the overall tension loss value pullLoss by a preset coefficient and adding the overall thrust loss value pushLoss: proximityLoss = pushLoss + λ × pullLoss, where λ is the preset coefficient. For example, in one embodiment, λ = 10.
In step S312, the positions of the example graphs on the background image are adjusted according to the proximity loss value.
According to the proximity loss value proximityLoss, the positions of the example graphs on the background image are adjusted using Stochastic Gradient Descent (SGD) so as to minimize proximityLoss; the degree of overlap of the two closest example graphs is thereby controlled within a preset overlap range, and the distance between example graphs is controlled so as not to exceed a preset distance range, yielding an ideal training image. Using stochastic gradient descent, the proximity loss value can be decreased iteratively. In one embodiment, an Adam optimizer is used with an iteration step size (learning rate) of 0.01 and a maximum of 5000 iterations; if the proximity loss value does not change for 5 consecutive iterations, the iteration process can be terminated early.
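A sketch of the position adjustment itself, re-expressing the losses with torch operations so that an Adam optimizer can differentiate them; the step size (0.01), the 5000-iteration cap and the 5-iteration early-stopping rule follow the text, while the hinge tension term is the same assumption as in the previous sketch. Only the top-left corners are optimized; sizes stay fixed.

```python
import torch

def arrange_instances(boxes_init, lam=10.0, t1=0.05, t2=0.2,
                      lr=0.01, max_iters=5000, patience=5):
    """Minimize proximityLoss = pushLoss + lam * pullLoss over instance positions."""
    boxes_init = torch.as_tensor(boxes_init, dtype=torch.float32)
    wh = boxes_init[:, 2:] - boxes_init[:, :2]             # widths/heights stay fixed
    pos = boxes_init[:, :2].clone().requires_grad_(True)   # trainable top-left corners
    areas = wh[:, 0] * wh[:, 1]
    opt = torch.optim.Adam([pos], lr=lr)
    last, stall = None, 0
    for _ in range(max_iters):
        boxes = torch.cat([pos, pos + wh], dim=1)
        LU = torch.maximum(boxes[:, None, :2], boxes[None, :, :2])
        RD = torch.minimum(boxes[:, None, 2:], boxes[None, :, 2:])
        diff = RD - LU
        inter = diff[..., 0].clamp(min=0) * diff[..., 1].clamp(min=0)
        dist = (-diff[..., 0] - diff[..., 1] - 1e2 * inter
                + 1e3 * torch.eye(len(pos))).detach()       # Distance matrix, no gradient
        nearest = dist.argmin(dim=1)
        pull = 0.0
        push_mask = torch.triu(torch.ones_like(inter), diagonal=1)
        for i, j in enumerate(nearest.tolist()):
            ratio = inter[i, j] / torch.minimum(areas[i], areas[j])
            pull = pull + (t1 - ratio).clamp(min=0) + (ratio - t2).clamp(min=0)
            push_mask[min(i, j), max(i, j)] = 0              # similar pairs excluded from push
        loss = (inter * push_mask).sum() + lam * pull
        opt.zero_grad(); loss.backward(); opt.step()
        if last is not None and abs(loss.item() - last) < 1e-6:
            stall += 1
            if stall >= patience:                            # loss flat for 5 iterations
                break
        else:
            stall = 0
        last = loss.item()
    return torch.cat([pos, pos + wh], dim=1).detach().numpy()
```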
Through the above steps S310 and S312, the example graphs can be distributed more uniformly over the background image.
In step S314, the example graphs are optimized, and the optimized example graphs and the background image are synthesized into a training image.
In an alternative embodiment, after the positions of the example graphs on the background image have been determined through step S312, unsatisfactory example graphs may be deleted, for example those whose occluded area is too large. Specifically, the occluded area ratio of each example graph on the background image is determined, the occluded area ratio being the ratio of the area of the occluded part of the example graph to the total area of the example graph, and any example graph whose occluded area ratio is larger than a set threshold is deleted. Illustratively, the threshold may be set to 0.1 or 0.2 as needed.
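A sketch of this deletion rule, assuming later-pasted example graphs occlude earlier ones; `filter_occluded` is an illustrative helper.

```python
import numpy as np

def filter_occluded(placements, canvas_shape, threshold=0.2):
    """Drop example graphs whose occluded area ratio exceeds the set threshold.

    `placements` is a list of (crop, crop_mask, top, left); later entries are
    assumed to lie on top of earlier ones.
    """
    occupied = np.zeros(canvas_shape, dtype=bool)
    kept = []
    for crop, crop_mask, top, left in reversed(placements):  # topmost graph first
        h, w = crop_mask.shape
        covered = occupied[top:top + h, left:left + w]
        visible = crop_mask & ~covered
        occluded_ratio = 1.0 - visible.sum() / crop_mask.sum()
        if occluded_ratio <= threshold:
            kept.append((crop, crop_mask, top, left))
            occupied[top:top + h, left:left + w] |= crop_mask  # kept graphs occlude those below
    kept.reverse()
    return kept
```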
In another alternative embodiment, after the positions of the example graphs on the background image have been determined through step S312, the labels of the occluded portions of the example graphs may be set to occluded. For example, a joint point that was originally visible may become occluded by other example graphs after the positions are adjusted; its label can then be changed from visible to occluded.
After the tuning step, the pixels of the non-occluded (visible) parts of the example graphs can be directly overlaid on the background image, synthesizing the example graphs and the background image into the training image.
It is understood that, in some embodiments, tuning may not be performed, and after step S312, the example graph after being adjusted in position and the background image are directly synthesized into the training image. In another embodiment, step S310 and step S312 may not be executed, and the example graph in the example background pair is randomly added to the background image of the example background pair, so that the example graph and the background image may be synthesized into the training image.
In one embodiment, the training image shown in fig. 5 can be obtained according to the above steps. For example, from the MS COCO data set, images that do not contain people and images that contain people can be collected separately, and example graphs of people can be segmented from the latter. One image is selected from the images not containing people as the background image, and four example graphs are selected from the example graphs of people. Randomly adding the four example graphs to arbitrary positions on the background image gives the training image shown in (a) of fig. 5. Planning the positions of the example graphs according to steps S310 and S312 above and further adjusting the positions of the four example graphs on the background image makes their relative positions more reasonable, giving the training image shown in (b) of fig. 5.
In another embodiment, a rectangular box is used to represent an example graph of the target object, and the training image shown in fig. 6 can be obtained by the above method. Fig. 6 (a) shows a training image obtained by randomly adding the example map to an arbitrary position on the background image, and fig. 6 (b) shows a training image obtained by further planning the position of the example map.
With the above method, multiple training images can be obtained as required and added to the training data set used to train the model. This increases the data volume of the training data set, improves the diversity of the training data, and improves the training effect of the model; when the trained model is used to process images with complex backgrounds, its stability is enhanced and its output results are more reliable.
Example four:
corresponding to the above method embodiment, this embodiment provides a data generating apparatus, referring to a schematic structural diagram of a data generating apparatus shown in fig. 4, where the apparatus includes:
an element obtaining module 41, configured to obtain a background image and obtain a plurality of example graphs of a target object;
a position determining module 42, configured to add the multiple example graphs to the background image, determine positions of the multiple example graphs, and enable a distance between any two example graphs in the multiple example graphs to be within a preset range;
and a data generating module 43, configured to synthesize the multiple instance graphs and the background image into a training image according to the determined position.
In an alternative embodiment, the element obtaining module 41 may further be configured to: extract a first image not containing the target object from a pre-acquired training data set as the background image; extract at least one second image containing the target object from the training data set; and segment a plurality of example graphs of the target object from the at least one second image.
In another alternative embodiment, the element obtaining module 41 may further be configured to: extracting a first image not containing a target object from a pre-acquired training data set as a background image to generate a candidate background set; extracting a second image containing a target object from the training data set, segmenting an example graph of the target object from the second image, and generating a candidate example set; selecting a background image from the candidate background set, selecting a plurality of example graphs from the candidate example set, and correspondingly forming example background pairs by the selected background image and the example graphs; the example graph and the background image in the example background pair are used to synthesize a training image. The element obtaining module 41 may be further configured to: scaling the instance graph segmented from the second image; and correspondingly forming an example background pair by the selected background image, the example graph segmented from the second image and the scaled example graph.
Optionally, the location determination module 42 may also be configured to: randomly add the multiple example graphs to any position of the background image; determine a proximity loss value from the location of each example graph of the multiple example graphs on the background image; and adjust the positions of the example graphs on the background image according to the proximity loss value.
The location determination module 42 may also be configured to: respectively determine an overall tension loss value and an overall thrust loss value according to the position of each example graph on the background image; and determine the proximity loss value according to the overall tension loss value and the overall thrust loss value.
The location determination module 42 may also be configured to: take each example graph on the background image as the current example graph one by one, and calculate the distance between the current example graph and each example graph on the background image; take the example graph closest to the current example graph as the adjacent example graph of the current example graph, and combine the current example graph and the adjacent example graph into a group of similar example graph pairs; form all the obtained similar example graph pairs into an example graph pair set; determine the tension loss value of each group of similar example graph pairs in the example graph pair set; and take the sum of the tension loss values of each group of similar example graph pairs as the overall tension loss value. And to: combine any two example graphs on the background image into a group of example graph pairs to obtain multiple groups of example graph pairs; select the non-similar example graph pairs outside the example graph pair set from the multiple groups of example graph pairs, and calculate the thrust loss value of each group of non-similar example graph pairs; and take the sum of the thrust loss values of each group of non-similar example graph pairs as the overall thrust loss value. And to: multiply the overall tension loss value by a preset coefficient and add the overall thrust loss value to obtain the proximity loss value.
The location determination module 42 may also be configured to: determine the occluded area ratio of each example graph on the background image, the occluded area ratio being the ratio of the area of the occluded part of the example graph to the total area of the example graph; and delete any example graph whose occluded area ratio is larger than the set threshold value. And to: set the label of an occluded portion of an example graph on the background image as occluded.
The embodiment of the invention provides a data generation device which increases the data volume of the training data set by adding example graphs of a target object to a background image and synthesizing training images; this improves the diversity of the training data, helps improve the training effect of the model, and enhances the stability of the trained model.
The device provided by the embodiment has the same implementation principle and technical effect as the foregoing embodiment, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiment for the portion of the embodiment of the device that is not mentioned.
The embodiment of the invention also provides electronic equipment which comprises a memory and a processor. The memory stores a computer program that can be run on the processor, and the processor executes the computer program to implement the method described in the foregoing method embodiment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the electronic device described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, this embodiment also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method provided in the foregoing method embodiment are executed.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (13)

1. A method of generating data, comprising:
acquiring a background image and acquiring a plurality of example graphs of a target object;
adding the multiple example graphs to the background image and determining the positions of the multiple example graphs such that the distance between any two of the multiple example graphs is within a preset range;
synthesizing the multiple example graphs and the background image into a training image according to the determined positions;
wherein adding the multiple example graphs to the background image and determining the positions of the multiple example graphs comprises:
randomly adding the multiple example graphs at arbitrary positions on the background image;
determining a proximity loss value according to the position of each example graph of the multiple example graphs on the background image;
and adjusting the positions of the example graphs on the background image according to the proximity loss value.
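For illustration only, the following Python sketch shows one way the placement step of claim 1 could be realized: example graphs are first dropped at random positions, and the positions are then adjusted to reduce a proximity loss. The finite-difference descent scheme and all names (adjust_positions, loss_fn) are assumptions of this sketch, not the claimed method; a candidate proximity loss appears after claim 8.

import numpy as np

def adjust_positions(positions, loss_fn, bounds, steps=200, lr=0.05, eps=1e-3):
    # Finite-difference gradient descent on loss_fn; the descent scheme,
    # step sizes and names here are assumptions of this sketch.
    lo, hi = np.zeros(2), np.asarray(bounds, dtype=float)
    pos = np.asarray(positions, dtype=float).copy()
    for _ in range(steps):
        base = loss_fn(pos)
        grad = np.zeros_like(pos)
        for i in range(pos.size):
            probe = pos.copy()
            probe.flat[i] += eps
            grad.flat[i] = (loss_fn(probe) - base) / eps
        pos = np.clip(pos - lr * grad, lo, hi)  # stay on the background image
    return pos

# random initial placement of five example graphs on a 640x480 background
rng = np.random.default_rng(0)
initial = rng.uniform(low=[0.0, 0.0], high=[640.0, 480.0], size=(5, 2))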
2. The method of claim 1, wherein the step of obtaining a background image comprises:
extracting, from a pre-acquired training data set, a first image not containing the target object as the background image;
and wherein the step of obtaining multiple example graphs of the target object comprises:
extracting at least one second image containing the target object from the training data set;
and segmenting the multiple example graphs of the target object from the at least one second image.
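A minimal sketch of the segmentation step of claim 2, assuming the training data set provides a boolean pixel mask for each target instance; cut_instance is a hypothetical helper, and the RGBA cut-out convention is an assumption of this sketch.

import numpy as np

def cut_instance(image, mask):
    # Crop one example graph from a second image: `mask` is a boolean HxW
    # array marking the target object's pixels (assumed to be available
    # from the training data set's instance annotations).
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    patch = image[y0:y1, x0:x1].copy()
    alpha = (mask[y0:y1, x0:x1] * 255).astype(image.dtype)
    return np.dstack([patch, alpha])   # RGBA cut-out; background transparent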
3. The method of claim 1, wherein the step of obtaining a background image and obtaining multiple example graphs of a target object comprises:
extracting, from a pre-acquired training data set, first images not containing the target object as background images to generate a candidate background set;
extracting second images containing the target object from the training data set, and segmenting example graphs of the target object from the second images to generate a candidate example set;
and selecting a background image from the candidate background set, selecting multiple example graphs from the candidate example set, and correspondingly forming example-background pairs from the selected background image and example graphs, wherein the example graphs and the background image in an example-background pair are used to synthesize a training image.
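A sketch of the candidate-set construction of claim 3, assuming the training data set is available as (image, instance_masks) tuples; build_example_background_pairs and its sampling policy are illustrative, and cut_instance is the hypothetical helper from the sketch after claim 2.

import random

def build_example_background_pairs(samples, num_pairs, k, seed=0):
    # `samples` is assumed to be a list of (image, instance_masks) tuples,
    # where instance_masks is empty for images without the target object.
    rng = random.Random(seed)
    backgrounds = [img for img, masks in samples if not masks]
    candidates = [cut_instance(img, m)     # hypothetical helper (claim 2 sketch)
                  for img, masks in samples for m in masks]
    pairs = []
    for _ in range(num_pairs):
        background = rng.choice(backgrounds)
        example_graphs = rng.sample(candidates, k)
        pairs.append((background, example_graphs))
    return pairs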
4. The method of claim 3, wherein the step of correspondingly forming example-background pairs from the selected background image and example graphs comprises:
scaling the example graphs segmented from the second images;
and correspondingly forming an example-background pair from the selected background image, the example graphs segmented from the second images, and the scaled example graphs.
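A sketch of the scaling step of claim 4, using OpenCV's resize; the scale factors are arbitrary assumptions and scaled_copies is a hypothetical name.

import cv2  # OpenCV

def scaled_copies(instance, factors=(0.5, 0.75, 1.25)):
    # Return the original cut-out plus rescaled copies, so one segmented
    # example graph yields several candidate example graphs.
    h, w = instance.shape[:2]
    copies = [instance]
    for f in factors:
        size = (max(1, int(w * f)), max(1, int(h * f)))  # (width, height)
        copies.append(cv2.resize(instance, size))
    return copies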
5. The method of claim 1, wherein the step of determining a proximity loss value according to the position of each example graph of the multiple example graphs on the background image comprises:
determining an overall tension loss value and an overall thrust loss value, respectively, according to the position of each example graph on the background image;
and determining the proximity loss value according to the overall tension loss value and the overall thrust loss value.
6. The method of claim 5, wherein the step of determining the overall tension loss value according to the position of each example graph on the background image comprises:
taking each example graph on the background image in turn as a current example graph, and calculating the distance between the current example graph and every other example graph on the background image; taking the example graph closest to the current example graph as the adjacent example graph of the current example graph, and combining the current example graph and the adjacent example graph into a group of similar example graph pairs;
forming an example graph pair set from all of the obtained similar example graph pairs;
determining a tension loss value for each group of similar example graph pairs in the example graph pair set;
and taking the sum of the tension loss values of the groups of similar example graph pairs as the overall tension loss value.
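A sketch of the overall tension loss of claim 6. The claim fixes the pairing rule (each example graph with its nearest neighbor) but not the per-pair formula; the squared-distance term below, which pulls each similar pair together, is an assumption of this sketch.

import numpy as np

def similar_pairs(pos):
    # Pair every example graph with its nearest neighbor on the background.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distances
    return {(i, int(j)) for i, j in enumerate(d.argmin(axis=1))}

def tension_loss(pos, pairs):
    # Assumed per-pair term: squared distance between the paired positions.
    return sum(float(np.sum((pos[i] - pos[j]) ** 2)) for i, j in pairs)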
7. The method of claim 6, wherein the step of determining the overall thrust loss value according to the position of each example graph on the background image comprises:
combining any two example graphs on the background image into a group of example graph pairs to obtain multiple groups of example graph pairs;
selecting, from the multiple groups of example graph pairs, the non-similar example graph pairs that fall outside the example graph pair set, and calculating a thrust loss value for each group of non-similar example graph pairs;
and taking the sum of the thrust loss values of the groups of non-similar example graph pairs as the overall thrust loss value.
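A sketch of the overall thrust loss of claim 7, summed over the example graph pairs that fall outside the similar-pair set; the hinge form and the margin value are assumptions of this sketch.

def thrust_loss(pos, pairs, margin=100.0):
    # Hinge-style term over all non-similar pairs: zero once two example
    # graphs are more than `margin` apart, growing as they overlap.
    total = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if (i, j) in pairs or (j, i) in pairs:
                continue                     # skip the similar pairs
            d = float(np.linalg.norm(pos[i] - pos[j]))
            total += max(0.0, margin - d) ** 2
    return total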
8. The method of claim 5, wherein the step of determining the proximity loss value according to the overall tension loss value and the overall thrust loss value comprises:
multiplying the overall tension loss value by a preset coefficient and adding the overall thrust loss value to obtain the proximity loss value.
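Claim 8 fixes only the combination: the overall tension loss value times a preset coefficient, plus the overall thrust loss value. Combining the two previous sketches (the coefficient value alpha is assumed):

def proximity_loss(pos, alpha=0.5):
    # Overall tension loss times a preset coefficient, plus overall thrust.
    pairs = similar_pairs(pos)
    return alpha * tension_loss(pos, pairs) + thrust_loss(pos, pairs)

# plugs directly into the position-adjustment sketch after claim 1:
# final = adjust_positions(initial, proximity_loss, bounds=(640, 480))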
9. The method of claim 1, wherein after the step of adding the multiple example graphs to the background image and determining the positions of the multiple example graphs, the method further comprises:
determining an occluded area ratio of each example graph on the background image, the occluded area ratio being the ratio of the area of the occluded portion of the example graph to the total area of the example graph;
and deleting any example graph whose occluded area ratio is greater than a set threshold.
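A sketch of the occluded-area-ratio check of claim 9, assuming example graphs are pasted in a known order so that later ones occlude earlier ones; occluded_area_ratios, the (x, y) offset convention and the 0.5 threshold are illustrative.

import numpy as np

def occluded_area_ratios(masks, offsets, canvas_hw):
    # Paste boolean instance masks onto the background in order (later ones
    # on top) and measure, per example graph, the fraction of its own pixels
    # hidden by later example graphs. Offsets are (x, y) top-left corners,
    # assumed to keep each mask fully inside the canvas.
    owner = np.full(canvas_hw, -1, dtype=int)
    areas = []
    for k, (mask, (x, y)) in enumerate(zip(masks, offsets)):
        h, w = mask.shape
        owner[y:y + h, x:x + w][mask] = k    # example graph k now on top
        areas.append(int(mask.sum()))        # assumed non-empty masks
    visible = np.bincount(owner[owner >= 0], minlength=len(masks))
    return 1.0 - visible / np.asarray(areas, dtype=float)

# deleting example graphs whose ratio exceeds a set threshold (0.5 assumed):
# keep = [k for k, r in enumerate(ratios) if r <= 0.5]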
10. The method of claim 1, wherein after the step of adding the multiple example graphs to the background image and determining the positions of the multiple example graphs, the method further comprises:
setting the label of the occluded portion of each example graph on the background image as occlusion.
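A sketch of the occlusion-labeling step of claim 10, reusing the hypothetical owner map from the previous sketch; the per-pixel label encoding is an assumed convention, not one fixed by the claim.

import numpy as np

OCCLUDED, VISIBLE = 2, 1   # assumed label encoding for the pixel map

def occlusion_label_maps(masks, offsets, owner):
    # Pixels that belong to an example graph but are owned by a later one
    # on the owner map are re-labeled as occlusion.
    labels = []
    for k, (mask, (x, y)) in enumerate(zip(masks, offsets)):
        h, w = mask.shape
        on_top = owner[y:y + h, x:x + w] == k
        labels.append(np.where(mask & ~on_top, OCCLUDED,
                               np.where(mask, VISIBLE, 0)))
    return labels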
11. A data generation apparatus, comprising:
an element acquisition module, configured to acquire a background image and acquire multiple example graphs of a target object;
a position determination module, configured to add the multiple example graphs to the background image and determine the positions of the multiple example graphs such that the distance between any two of the multiple example graphs is within a preset range;
and a data generation module, configured to synthesize the multiple example graphs and the background image into a training image according to the determined positions;
wherein the position determination module is further configured to randomly add the multiple example graphs at arbitrary positions on the background image; determine a proximity loss value according to the position of each example graph of the multiple example graphs on the background image; and adjust the positions of the example graphs on the background image according to the proximity loss value.
12. An electronic device, comprising a memory and a processor;
wherein the memory stores a computer program operable on the processor, and the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 10.
13. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN201811523178.9A 2018-12-12 2018-12-12 Data generation method and device and electronic equipment Active CN109583509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811523178.9A CN109583509B (en) 2018-12-12 2018-12-12 Data generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109583509A CN109583509A (en) 2019-04-05
CN109583509B (en) 2020-11-03

Family

ID=65928433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811523178.9A Active CN109583509B (en) 2018-12-12 2018-12-12 Data generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109583509B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070540B * 2019-04-28 2023-01-10 Tencent Technology (Shenzhen) Co., Ltd. Image generation method and device, computer equipment and storage medium
CN111325767B * 2020-02-17 2023-06-02 Hangzhou Dianzi University Real scene-based citrus fruit tree image set synthesis method
CN114375460A (en) * 2020-07-31 2022-04-19 Huawei Technologies Co., Ltd. Data enhancement method and training method of instance segmentation model and related device
CN113298913A (en) * 2021-06-07 2021-08-24 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Data enhancement method and device, electronic equipment and readable storage medium
CN113688887A (en) * 2021-08-13 2021-11-23 Baidu Online Network Technology (Beijing) Co., Ltd. Training and image recognition method and device of image recognition model
CN115965647A (en) * 2021-10-09 2023-04-14 Beijing ByteDance Network Technology Co., Ltd. Background image generation method, image fusion method, device, electronic equipment and readable medium
CN115082795A (en) * 2022-07-04 2022-09-20 Mech-Mind (Beijing) Robotics Technologies Co., Ltd. Virtual image generation method, device, equipment, medium and product
WO2024008081A1 (en) * 2022-07-04 2024-01-11 Mech-Mind (Beijing) Robotics Technologies Co., Ltd. Image generation method and model training method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105593901A * 2013-06-28 2016-05-18 NEC Corporation Teaching data generating device, method, and program, and crowd state recognition device, method, and program
CN108257119A * 2018-01-08 2018-07-06 Zhejiang University Near-ultraviolet-image-based early-warning detection method for hazardous chemicals floating in nearshore sea areas
CN108305262A * 2017-11-22 2018-07-20 Tencent Technology (Shenzhen) Co., Ltd. File scanning method, device and equipment
CN108492343A * 2018-03-28 2018-09-04 Northeastern University Image synthesis method for expanding training data for target recognition
CN108876791A * 2017-10-23 2018-11-23 Beijing Kuangshi Technology Co., Ltd. Image processing method, device and system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3507773A1 (en) * 2016-09-02 2019-07-10 Artomatix Ltd. Systems and methods for providing convolutional neural network based image synthesis using stable and controllable parametric models, a multiscale synthesis framework and novel network architectures
EP3343432B1 (en) * 2016-12-29 2024-03-20 Elektrobit Automotive GmbH Generating training images for machine learning-based object recognition systems
US10255681B2 (en) * 2017-03-02 2019-04-09 Adobe Inc. Image matting using deep learning
CN108875732B * 2018-01-11 2022-07-12 Beijing Kuangshi Technology Co., Ltd. Model training and instance segmentation method, device and system and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Synthesizing Object-Background Data for Large 3-D Datasets; David Breeden et al.; Citeseer; 2012-04-30; pp. 1-5 *

Also Published As

Publication number Publication date
CN109583509A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CN109583509B (en) Data generation method and device and electronic equipment
CN108122234B (en) Convolutional neural network training and video processing method and device and electronic equipment
CN108961303B (en) Image processing method and device, electronic equipment and computer readable medium
US20200026910A1 (en) Gesture identification, control, and neural network training methods and apparatuses, and electronic devices
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN108875931B (en) Neural network training and image processing method, device and system
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
US10726599B2 (en) Realistic augmentation of images and videos with graphics
JP7013489B2 (en) Learning device, live-action image classification device generation system, live-action image classification device generation device, learning method and program
CN112419170A (en) Method for training occlusion detection model and method for beautifying face image
CN107644423B (en) Scene segmentation-based video data real-time processing method and device and computing equipment
CN112101344B (en) Video text tracking method and device
CN108885683B (en) Method and system for pose estimation
US11403807B2 (en) Learning hybrid (surface-based and volume-based) shape representation
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
Liu et al. Facial image inpainting using attention-based multi-level generative network
CN111191553A (en) Face tracking method and device and electronic equipment
CN108961314B (en) Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium
CN116802683A (en) Image processing method and system
WO2021179751A1 (en) Image processing method and system
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN117011856A (en) Handwriting skeleton refining method, system, equipment and medium based on deep reinforcement learning
CN107622498B (en) Image crossing processing method and device based on scene segmentation and computing equipment
US20230153965A1 (en) Image processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Data generation methods, devices, and electronic devices

Effective date of registration: 20230404

Granted publication date: 20201103

Pledgee: Shanghai Yunxin Venture Capital Co.,Ltd.

Pledgor: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd.; NANJING KUANGYUN TECHNOLOGY Co.,Ltd.

Registration number: Y2023990000195
