CN111753922A - Processing method and device for model training label and electronic equipment - Google Patents


Info

Publication number: CN111753922A
Application number: CN202010622978.7A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: image, label, tag, model, training
Legal status: Pending
Inventors: 张印帅, 姜馨, 张柳新
Assignee: Beijing Lenovo Software Ltd
Priority and filing date: 2020-06-30
Publication date: 2020-10-09

Classifications

    • G06F 18/214: Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Pattern recognition; analysing; classification techniques
    • G06Q 50/20: ICT specially adapted for business processes of specific sectors; services; education
    • G06T 3/04: Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
    • G06V 20/00: Scenes; scene-specific elements

Abstract

The application discloses a method and apparatus for processing model training labels, and an electronic device. The method comprises: obtaining a first image; obtaining an object label of at least one object in the first image, where the object label is used for training a preset teaching training model; outputting the first image containing the object label so that the object label in the first image can be modified; and, after receiving a label modification operation for the object label, training the teaching training model with the modified object label to obtain a new teaching training model, where the output parameters of the object in the first image match the modified object label.

Description

Processing method and device for model training label and electronic equipment
Technical Field
The present application relates to the field of intelligent education, and in particular to a method and apparatus for processing model training labels, and an electronic device.
Background
With the development of technology, artificial-intelligence education is being applied more and more widely. Teaching about training models in artificial-intelligence education generally comprises four stages: labeling the training samples, extracting features from the training samples, training the model, and testing the model.
At present, visual teaching has been realized only for the feature-extraction stage and the model-training stage; the sample-labeling stage cannot yet be visualized.
Disclosure of Invention
In view of the above, the present application provides a method and an apparatus for processing model training labels, and an electronic device, as follows:
A method for processing model training labels comprises the following steps:
obtaining a first image;
obtaining an object label of at least one object in the first image, where the object label is used for training a preset teaching training model;
outputting the first image containing the object label so that the object label in the first image can be modified;
after receiving a label modification operation for the object label, training the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
Preferably, the above method further comprises:
obtaining a second image;
and inputting the second image into the teaching training model to output image test data, where the image test data comprise a test object obtained by the teaching training model processing the second image, and the object attributes of the test object.
Preferably, in the above method, the output parameters of the test object match its object attributes.
Preferably, in the above method, the object in the first image has a label modification identifier, which prompts that the object is in a state in which its label can be modified.
Preferably, in the above method, the object in the first image has a label modification control, which is configured to receive a label modification operation for the object's label.
Preferably, in the above method, obtaining an object label of at least one object in the first image comprises:
performing image recognition on the first image to obtain at least one object in the first image;
and obtaining an object label of the object.
An apparatus for processing model training labels comprises:
an image obtaining unit for obtaining a first image;
a label obtaining unit for obtaining an object label of at least one object in the first image, the object label being used for training a preset teaching training model;
an image output unit configured to output the first image containing the object label so that the object label in the first image can be modified;
an operation receiving unit configured to receive a label modification operation for the object label;
and a model training unit for training the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
Preferably, in the above apparatus, the image obtaining unit is further configured to obtain a second image, and the apparatus further comprises:
a model testing unit for inputting the second image into the teaching training model to output image test data, where the image test data comprise a test object obtained by the teaching training model processing the second image, and the object attributes of the test object.
Preferably, the label obtaining unit is specifically configured to: perform image recognition on the first image to obtain at least one object in the first image; and obtain an object label of the object.
An electronic device comprises:
a display;
a processor for obtaining a first image; obtaining an object label of at least one object in the first image, the object label being used for training a preset teaching training model; and outputting, by the display, the first image containing the object label so that the object label in the first image can be modified;
the processor being further configured to receive a label modification operation for the object label and to train the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
According to the above scheme, after the first image and the object labels of the objects it contains are obtained, the object labels used to train the teaching training model are output so that they can be modified. After a label modification operation for an object label is received, the teaching training model can be trained with the modified object label to obtain a new teaching training model, and the output parameters of the object in the first image correspond to the modified object label. By outputting the first image containing the object labels that participate in model training, the user is given a teaching experience in which those labels can be modified; the user can thus intuitively experience both the label modification and the training process of the model, which improves the learning experience.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a processing method for model training labels according to an embodiment of the present disclosure;
FIGS. 2-3 are exemplary diagrams of embodiments of the present application, respectively;
FIG. 4 is a partial flowchart of a processing method for model training labels according to an embodiment of the present disclosure;
FIGS. 5-9 are diagrams of another example of an embodiment of the present application, respectively;
fig. 10 is a schematic structural diagram of a processing apparatus for model training labels according to a second embodiment of the present application;
fig. 11 is another schematic structural diagram of a processing apparatus for model training labels according to a second embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of an implementation of a processing method for model training labels according to an embodiment of the present application is shown. The method may be applied to an electronic device capable of image processing and input/output processing, such as a computer or a server with input/output devices. The technical solution in this embodiment mainly serves to visualize the processing of model training labels so as to improve the user's learning experience.
Specifically, the method in this embodiment may include the following steps:
step 101: a first image is obtained.
The first image may be a sample image that participates in training the teaching training model, and it may contain one or more objects, such as flowers, animals, persons, or weapons. These objects are used to train the teaching training model so that it can perform corresponding processing, such as object classification, on the objects contained in a test image.
Step 102: an object label of at least one object in the first image is obtained.
In this embodiment, image recognition may be performed on the first image to obtain one or more objects in it. For example, petal-feature recognition may be performed to recognize a flower object, or facial-feature recognition may be performed to recognize a person object. An object label is then obtained for each of these objects: for a flower object, any one or more of a petal-count label, a petal-color label, a petal-size label, and so on; for a person object, any one or more of a face label, a height label, a body-type label, an action label, and so on.
It should be noted that these object labels can be used for training a preset teaching training model. When the first image serves as a training sample of the teaching training model, the object labels of the objects it contains participate in the training and thereby optimize the model parameters. The teaching training model may be any model that processes the objects in a test image. For example, after a teaching training model built on a classification algorithm is trained with the petal-count, petal-color, and petal-size labels of flower objects, it can process the objects in a test image according to the learned model parameters and classify them, determining the object type of each object, such as whether it is a flower of a specific type.
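For illustration only, the mapping from recognition output to editable object labels in steps 101 and 102 can be sketched in Python as follows. The patent does not specify a particular detector or attribute classifier, so the detect and infer_labels callables, and all names below, are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ObjectLabel:
    """One editable label on a detected object, e.g. petal_color = pink."""
    name: str   # e.g. "petal_color"
    value: str  # e.g. "pink"

@dataclass
class DetectedObject:
    """An object recognized in the first image, with its editable labels."""
    object_id: str
    bbox: Tuple[int, int, int, int]              # (x, y, width, height) in the image
    labels: List[ObjectLabel] = field(default_factory=list)

def extract_object_labels(image,
                          detect: Callable,        # hypothetical: image -> [(object_id, bbox)]
                          infer_labels: Callable   # hypothetical: (image, bbox) -> {name: value}
                          ) -> List[DetectedObject]:
    """Recognize objects in the first image and attach an initial,
    modifiable label set to each one (steps 101-102 as a sketch)."""
    objects = []
    for object_id, bbox in detect(image):
        obj = DetectedObject(object_id=object_id, bbox=bbox)
        for name, value in infer_labels(image, bbox).items():
            obj.labels.append(ObjectLabel(name, value))
        objects.append(obj)
    return objects
```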
Step 103: the first image containing the object tag is output such that the object tag in the first image can be modified.
In this embodiment, the first image containing the object labels may be output through an interactive interface of an output device, so that the user can modify the parameters of the displayed object labels, for example any one or more of a flower object's petal-count, petal-color, and petal-size labels. After an object label in the first image is modified, the corresponding output parameter in the first image matches the modified object label.
For example, suppose the first image contains a flower object with a petal-count label, a petal-color label, and a petal-size label. After the first image is output, the user can see the object labels of the flower object directly and can then modify them.
As shown in fig. 2, when the petal color of flower object A is changed from the original pink to blue, the output color of flower object A in the first image also changes to blue, so the user can directly see the effect of modifying the object label of flower object A.
As shown in fig. 3, the user changes the petal color of flower object A from the original pink to blue and its petal count from 4 to 5, and also changes the petal color of flower object B from pink to green. Correspondingly, the output color of flower object A in the first image becomes blue with 5 petals, and the output color of flower object B becomes green, so the user can directly see the effect of modifying the flower objects' labels.
Specifically, in this embodiment, the user may modify the object labels in the first image shown on the output device through an input device such as a mouse, a keyboard, or a touch screen.
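For illustration, the coupling between an object's labels and its displayed parameters in step 103 might be sketched as follows, reusing the DetectedObject structure from the previous sketch; the redraw callback is a hypothetical drawing component, not something the patent specifies.

```python
def modify_object_label(obj, label_name, new_value, redraw):
    """Update one of the object's labels and refresh its rendering from
    the current label values, so the displayed output parameters always
    match the (possibly modified) labels.

    redraw is a hypothetical callback: (bbox, {name: value}) -> None."""
    for label in obj.labels:
        if label.name == label_name:
            label.value = new_value
    # e.g. changing petal_color from "pink" to "blue" re-draws blue petals
    redraw(obj.bbox, {l.name: l.value for l in obj.labels})
```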
Step 104: a tag modification operation for an object tag is received.
In this embodiment, a label modification operation for one or more object labels may be received through an interface with the input device.
It should be noted that the object labels targeted by a label modification operation may belong to the same object or to different objects. For example, the operation shown in fig. 2 modifies only the petal-color label of flower object A, whereas the operation shown in fig. 3 targets both flower object A and flower object B: it modifies the petal-color and petal-count labels of flower object A and the petal-color label of flower object B.
Step 105: and training the teaching training model by using the modified object label to obtain a new teaching training model.
In this embodiment, after the label modification operation for the object label is received, the modified object label can be used to train the teaching training model and obtain a new teaching training model. The user can thus directly observe how the modified object label participates in the model's training, which realizes visual teaching of label modification and improves the learning experience.
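For illustration, step 105 can be sketched as rebuilding the training set from the current, possibly modified, labels and refitting the model. The fit()-style model interface and the crop helper are assumptions; the patent does not fix a particular training algorithm.

```python
def retrain_teaching_model(model, first_image, objects, crop):
    """Rebuild the training data from the current (possibly modified)
    object labels and refit the teaching model (step 105 as a sketch).

    model is assumed to expose a scikit-learn style fit(); crop is a
    hypothetical helper that cuts an object's bbox region out of the image."""
    samples, targets = [], []
    for obj in objects:
        samples.append(crop(first_image, obj.bbox))
        # the modified labels act as the ground truth for this sample
        targets.append({l.name: l.value for l in obj.labels})
    model.fit(samples, targets)  # training with modified labels yields the new model
    return model
```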
As can be seen from the above, in the processing method for model training labels provided by this embodiment, after the first image and the object labels of its objects are obtained, the labels used to train the teaching training model are output so that they can be modified. Once a label modification operation is received, the teaching training model is trained with the modified labels to obtain a new teaching training model, and the output parameters of the objects in the first image correspond to the modified labels. By outputting the first image together with the object labels that participate in model training, this embodiment gives the user a teaching experience in which those labels can be modified, lets the user intuitively experience both the label modification and the model's training process, and thereby improves the learning experience.
In one implementation, this embodiment may also provide the user with a visual learning experience of model testing; specifically, the method may further include the following steps, as shown in fig. 4:
step 106: a second image is obtained.
The second image may be an image that needs to be processed, such as a test image whose objects need to be classified, and it is used to test the training effect of the teaching training model. The second image may contain one or more objects with unknown attributes, such as flowers of unknown color or size, animals of unknown species, persons of unknown identity, or weapons of unknown model or serial number, which the teaching training model must process to obtain the objects in the second image and their object attributes.
Step 107: the second image is input to a teaching training model to output image test data.
The image test data comprise a test object obtained by the teaching training model processing the second image, and the object attributes of that test object.
It should be noted that, in this embodiment, after the second image is obtained, the image with its unknown object attributes is input into the teaching training model. The model, having been trained with the original and the modified object labels, processes the second image using its learned model parameters to obtain each test object in the image along with that object's attributes.
For example, if the petal-color label of flower object A in the first image is pink, the teaching training model is trained with data in which that label is pink. The new teaching training model then finds one or more test objects X in the second image with object characteristics the same as or similar to flower object A, and determines that the petal color of each test object X is pink, as shown in fig. 5.
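For illustration, step 107 might be sketched as follows; a predict() method returning (region, attributes) pairs is an assumption, since the patent only requires that the image test data pair each test object with its object attributes.

```python
def test_teaching_model(model, second_image):
    """Run the (re)trained teaching model on a test image and pair each
    detected test object with its predicted attributes (step 107 as a sketch)."""
    image_test_data = []
    for region, attributes in model.predict(second_image):
        image_test_data.append({"object": region, "attributes": attributes})
    return image_test_data
```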
On this basis, this embodiment can provide the user with a visual learning experience of model testing, in which the user intuitively experiences the training effect of the trained model.
Further, step 107 in this embodiment may be executed after step 105, that is, combined with the aforementioned visualization of label modification. The teaching training model is trained with the object labels modified in the output first image, and the second image is then input into the new teaching training model as a test image. The user can thus directly experience both the label modification and the training effect of the model trained with the modified labels, which further improves the learning experience.
For example, the user changes the petal-color label of flower object A in the output first image to blue, so the teaching training model is trained with data in which that label is blue. The new teaching training model then finds one or more test objects X in the second image with object characteristics the same as or similar to flower object A, and determines that the petal color of each test object X is blue, as shown in fig. 6.
As another example, the user changes the petal-color label of flower object A in the output first image to blue, its petal count to 5, and the petal color of flower object B to green. The teaching training model is accordingly trained with data in which flower object A's petal-color label is blue, its petal count is 5, and flower object B's petal color is green. The new teaching training model then finds in the second image one or more test objects C with object characteristics the same as or similar to flower object A, determining their petal color to be blue and their petal count to be 5, and one or more test objects D with characteristics the same as or similar to flower object B, determining their petal color to be green, as shown in fig. 7.
Optionally, in this embodiment, the second image may also be output, with the output parameters of each test object matching its object attributes. As shown in fig. 6, the petal color of the test objects in the second image that share flower object A's characteristics is blue; as shown in fig. 7, test object C in the second image has blue petals and a petal count of 5, while test object D has green petals.
On this basis, the user can intuitively experience the training effect of the trained model, and in particular the training effect of a new model trained with the modified object labels.
In one implementation, an object in the output first image has a label modification identifier, which prompts that the object is in a state in which its label can be modified. In a specific implementation, the identifier may be a wire frame or a region, such as a square wire frame or a circular highlighted area, output at a region or position of the first image associated with the object, to indicate that the object's label can be modified.
As shown in fig. 8, the label modification identifier of flower object A may be a square wire frame that is highlighted when the mouse hovers over the area corresponding to flower object A, indicating that the object's label is modifiable. When the mouse leaves that area, the frame may be hidden so that it does not interfere with the user modifying the object labels of other objects in the first image.
Building on the above implementation, an object in the first image also has a label modification control for receiving label modification operations for the object's labels. In a specific implementation, the control may be a menu option or an input box in which the user edits the corresponding object label, so that in this embodiment a label modification operation for an object label can be received through the control interface of the label modification control.
As shown in fig. 9, when the mouse hovers over the area corresponding to flower object A, the square wire frame is highlighted to indicate that the object's label can be modified, and three label modification controls for flower object A are displayed: one for the petal-color label, one for the petal-count label, and one for the petal-size label. The user can select a petal color from the pull-down menu of the petal-color control, enter a petal count in the input box of the petal-count control, and enter a petal size in the input box of the petal-size control. When the mouse leaves the area corresponding to flower object A, both the wire frame and the controls may be hidden so that they do not interfere with the user modifying the object labels of other objects in the first image.
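For illustration, the hover-triggered controls of fig. 9 could be prototyped with Tkinter roughly as follows; the widget layout, the label value choices, and the on_change callback are illustrative assumptions rather than part of the patent.

```python
import tkinter as tk

def open_label_editor(root, obj, on_change):
    """A floating panel with one control per label, as in fig. 9:
    a pull-down for petal color and input boxes for petal count and size."""
    panel = tk.Toplevel(root)
    panel.title("Edit labels: " + str(obj.object_id))

    # Pull-down menu for the petal-color label.
    color_var = tk.StringVar(value="pink")
    tk.OptionMenu(panel, color_var, "pink", "blue", "green").pack()

    # Input boxes for the petal-count and petal-size labels.
    count_entry = tk.Entry(panel)
    count_entry.pack()
    size_entry = tk.Entry(panel)
    size_entry.pack()

    def submit():
        # Hand the modified labels back to the caller (e.g. to re-render
        # the object and to retrain the teaching model), then close.
        on_change(obj, {"petal_color": color_var.get(),
                        "petal_count": count_entry.get(),
                        "petal_size": size_entry.get()})
        panel.destroy()

    tk.Button(panel, text="Apply", command=submit).pack()
```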
Referring to fig. 10, a schematic structural diagram of a processing apparatus for model training labels according to the second embodiment of the present application is shown. The apparatus may be applied to an electronic device capable of image processing and input/output processing, such as a computer or a server with input/output devices. The technical solution in this embodiment mainly serves to visualize the processing of model training labels so as to improve the user's learning experience.
Specifically, the apparatus in this embodiment may include the following units:
an image obtaining unit 1001 for obtaining a first image;
a label obtaining unit 1002 for obtaining an object label of at least one object in the first image, the object label being used for training a preset teaching training model;
an image output unit 1003 for outputting the first image containing the object label so that the object label in the first image can be modified, where the output parameters of the object in the first image match the modified object label;
an operation receiving unit 1004 for receiving a label modification operation for the object label;
and a model training unit 1005 for training the teaching training model with the modified object label to obtain a new teaching training model.
as can be seen from the above solution, in the processing apparatus for model training labels provided in the second embodiment of the present application, after the first image is obtained and the object labels of the objects included in the first image are obtained, the object labels that can train the teaching training model are output, so that the object labels can be modified, and further after the label modification operation for the object labels is received, the teaching training model can be trained by using the modified object labels, so as to obtain a new teaching training model, and the output parameters of the objects in the first image correspond to the modified object labels. Therefore, in the embodiment, the first image including the object label participating in the model training is output, so that the teaching experience that the object label participating in the model training can be modified is provided for the user, the user can intuitively experience the modification of the object label and the training process of the model, and the learning experience of the user is improved.
In one implementation, the image obtaining unit 1001 is further configured to obtain a second image, and the apparatus in this embodiment may further include the following unit, as shown in fig. 11:
the model testing unit 1006 is configured to input the second image into the teaching training model to output image testing data, where the image testing data includes a test object obtained by processing the second image by the teaching training model and an object attribute of the test object. Wherein, the output parameter of the test object is matched with the object attribute.
In one implementation, the object in the first image has a label modification identifier, which prompts that the object is in a state in which its label can be modified.
On this basis, the object in the first image also has a label modification control for receiving label modification operations for the object's label.
In one implementation, the label obtaining unit 1002 is specifically configured to: perform image recognition on the first image to obtain at least one object in the first image; and obtain an object label of the object.
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding description in the foregoing, and details are not described here.
Referring to fig. 12, a schematic structural diagram of an electronic device according to the third embodiment of the present disclosure is shown. The electronic device may be one capable of image processing and input/output processing, such as a computer or a server with input/output devices. The technical solution in this embodiment mainly serves to visualize the processing of model training labels so as to improve the user's learning experience.
Specifically, the electronic device in this embodiment may include the following structure:
a display 1201;
a processor 1202 for obtaining a first image; obtaining an object label of at least one object in the first image, the object label being used for training a preset teaching training model; and outputting, by the display 1201, the first image containing the object label so that the object label in the first image can be modified;
the processor 1202 being further configured to receive a label modification operation for the object label and to train the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
It should be noted that the electronic device in this embodiment may further include a memory that stores an application program implementing the functions of the processor 1202, together with the data generated when the processor 1202 executes that application. The memory also stores the program content of the teaching training model.
The electronic device in this embodiment may further include input components such as a mouse and a keyboard, or an input component such as a touch screen integrated into the display. An input component of the electronic device obtains the label modification operation from the user and transmits it to the processor 1202.
As can be seen from the above, in the electronic device provided by the third embodiment, after the first image and the object labels of its objects are obtained, the labels used to train the teaching training model are output so that they can be modified. Once a label modification operation is received, the teaching training model is trained with the modified labels to obtain a new teaching training model, and the output parameters of the objects in the first image correspond to the modified labels. By outputting the first image together with the object labels that participate in model training, this embodiment gives the user a teaching experience in which those labels can be modified, lets the user intuitively experience both the label modification and the model's training process, and thereby improves the learning experience.
In practical model-training teaching, the technical solution of this embodiment provides an interactive scenario combined with an artificial-intelligence teaching training model, visually presenting both the label modification stage and the verification/testing stage for model training samples, and thus completing the visualization of the entire artificial-intelligence teaching process. Taking an unmanned-supermarket model as an example, the label modification and the verification test results in the supermarket images are both displayed visually for teaching, which improves the user's learning experience.
It should be noted that the images referred to above may be two-dimensional images, as shown in the drawings of the embodiments, or three-dimensional images; technical solutions formed with different image types or in different application scenarios all fall within the scope of the present application.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for processing model training labels, comprising:
obtaining a first image;
obtaining an object label of at least one object in the first image, where the object label is used for training a preset teaching training model;
outputting the first image containing the object label so that the object label in the first image can be modified;
after receiving a label modification operation for the object label, training the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
2. The method of claim 1, further comprising:
obtaining a second image;
and inputting the second image into the teaching training model to output image test data, where the image test data comprise a test object obtained by the teaching training model processing the second image, and the object attributes of the test object.
3. The method of claim 2, wherein the output parameters of the test object match its object attributes.
4. The method of claim 1 or 2, wherein the object in the first image has a label modification identifier, which prompts that the object is in a state in which its label can be modified.
5. The method of claim 4, wherein the object in the first image has a label modification control to receive a label modification operation for the object's label.
6. The method of claim 1 or 2, wherein obtaining an object label of at least one object in the first image comprises:
performing image recognition on the first image to obtain at least one object in the first image;
and obtaining an object label of the object.
7. An apparatus for processing model training labels, comprising:
an image obtaining unit for obtaining a first image;
a label obtaining unit for obtaining an object label of at least one object in the first image, the object label being used for training a preset teaching training model;
an image output unit configured to output the first image containing the object label so that the object label in the first image can be modified;
an operation receiving unit configured to receive a label modification operation for the object label;
and a model training unit for training the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
8. The apparatus of claim 7, wherein the image obtaining unit is further configured to obtain a second image, and the apparatus further comprises:
a model testing unit for inputting the second image into the teaching training model to output image test data, where the image test data comprise a test object obtained by the teaching training model processing the second image, and the object attributes of the test object.
9. The apparatus of claim 7, wherein the label obtaining unit is configured to: perform image recognition on the first image to obtain at least one object in the first image; and obtain an object label of the object.
10. An electronic device, comprising:
a display;
and a processor for obtaining a first image; obtaining an object label of at least one object in the first image, the object label being used for training a preset teaching training model; and outputting, by the display, the first image containing the object label so that the object label in the first image can be modified;
the processor being further configured to receive a label modification operation for the object label and to train the teaching training model with the modified object label to obtain a new teaching training model;
wherein the output parameters of the object in the first image match the modified object label.
CN202010622978.7A (filed 2020-06-30, priority date 2020-06-30): Processing method and device for model training label and electronic equipment. Status: Pending. Published as CN111753922A.


Publications (1)

CN111753922A (en): published 2020-10-09

Family

Family ID: 72680309

Family Applications (1)

CN202010622978.7A: Pending; priority date 2020-06-30; filing date 2020-06-30; published as CN111753922A (en)

Country Status (1)

CN: CN111753922A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019101535A (en) * 2017-11-29 2019-06-24 コニカミノルタ株式会社 Teacher data preparation device and method thereof and image segmentation device and method thereof
CN110532487A (en) * 2019-09-11 2019-12-03 北京百度网讯科技有限公司 The generation method and device of label
CN110865756A (en) * 2019-11-12 2020-03-06 苏州智加科技有限公司 Image labeling method, device, equipment and storage medium



Legal Events

Code PB01: Publication
Code SE01: Entry into force of request for substantive examination