CN110058756A - Image sample annotation method and device - Google Patents
Image sample annotation method and device
- Publication number
- CN110058756A (publication) · Application CN201910319246.8A
- Authority
- CN
- China
- Prior art keywords
- box
- image
- category
- annotation box
- position information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
- G06F16/162—Delete operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
This application provides an image sample annotation method and device. The position information of annotation boxes is obtained first; then, for each preset category, the box-selected images that do not belong to that category are deleted, and the remaining box-selected images of the category, together with their corresponding position information, are taken as the labeled data of the category. The acquisition of annotation boxes is thus decoupled from the deletion-based screening of box-selected images. Compared with manually entering the type of each box-selected image immediately after drawing its annotation box, multiple box-selected images can be deleted in batches, and the step of obtaining annotation-box positions can run in parallel with the screening of box-selected images at a given point in time. Efficiency can therefore be significantly improved over traditional annotation methods.
Description
Technical field
This application relates to the field of artificial intelligence, and in particular to an image sample annotation method and device.
Background art
Image object detection based on deep learning is increasingly mature and has been applied in numerous fields such as intelligent retail, intelligent surveillance, autonomous driving, and intelligent healthcare, where it delivers powerful technical benefits. Deep-learning-based object detection models must be trained on large numbers of annotated image samples, so substantial manpower has to be invested in annotating image samples.
Taking Fig. 1 as an example, the traditional annotation method lets a user manually draw an annotation box around a target on the image sample and records the box's position information. After a target is boxed, a category list pops up, and the user manually selects the category the target belongs to, completing the annotation of one target (the characters in Fig. 1 are tool buttons of existing annotation software and are not described further here). Clearly, this annotation procedure must be repeated to annotate every target in Fig. 1.
Therefore, how to improve annotation efficiency has become an urgent problem to be solved.
Summary of the invention
This application provides an image sample annotation method and device, aiming to solve the problem of how to improve the efficiency of image sample annotation.
To achieve the above goal, this application provides the following technical solutions:
An image sample annotation method, comprising:
obtaining the position information of annotation boxes, where an annotation box is used to box-select a target on the image sample, yielding a box-selected image;
for each preset category, deleting the box-selected images that do not belong to the category, to obtain the box-selected images of the category;
for each preset category, taking the box-selected images of the category and their corresponding position information as the labeled data of the category, where the position information corresponding to a box-selected image is the position information of the annotation box from which it was obtained.
Optionally, obtaining the position information of annotation boxes includes:
obtaining the position information of annotation boxes output by a pre-trained box-selection model, where the box-selection model is used to box-select targets on the image sample;
or, obtaining the position information of the annotation boxes from manual adjustment operations on the position information of reference annotation boxes output by the box-selection model.
Optionally, the training process of the box-selection model includes:
obtaining manually drawn annotation boxes from manual box-selection operations on the first batch of image samples;
training a preset box-selection model with the manually drawn annotation boxes;
feeding new image samples into the trained box-selection model to obtain the annotation boxes it outputs for the new image samples;
when the annotation boxes output by the box-selection model are manually corrected, training the box-selection model with the manually corrected annotation boxes;
when, among the annotation boxes output by the box-selection model, the number of manually corrected boxes does not exceed a preset threshold, ending the training process of the box-selection model.
Optionally, obtaining the position information of annotation boxes includes:
obtaining the position information of annotation boxes using a first process;
and deleting, for each preset category, the box-selected images that do not belong to the category includes:
deleting, for each preset category and using a second process, the box-selected images that do not belong to the category, to obtain the box-selected images of the category;
the method further includes:
once at least one box-selected image has been obtained by the first process, using the second process to obtain the box-selected images of at least one category while the first process obtains the position information of new annotation boxes.
Optionally, deleting the box-selected images that do not belong to the category, to obtain the box-selected images of the category, includes:
creating a folder for the category;
placing box-selected images into the folder of the category;
and, based on manual delete operations on the images in the folder of the category, deleting the box-selected images that do not belong to the category, to obtain the box-selected images of the category.
An image sample annotation device, comprising:
an annotation box acquisition module for obtaining the position information of annotation boxes, where an annotation box is used to box-select a target on the image sample, yielding a box-selected image;
a deletion module for deleting, for each preset category, the box-selected images that do not belong to the category, to obtain the box-selected images of the category;
and a labeled data acquisition module for taking, for each preset category, the box-selected images of the category and their corresponding position information as the labeled data of the category, where the position information corresponding to a box-selected image is the position information of the annotation box from which it was obtained.
Optionally, the annotation box acquisition module is specifically configured to obtain the position information of annotation boxes output by a pre-trained box-selection model, where the box-selection model is used to box-select targets on the image sample; or to obtain the position information of the annotation boxes from manual adjustment operations on the position information of reference annotation boxes output by the box-selection model.
Optionally, the device further includes:
a model training module for obtaining manually drawn annotation boxes from manual box-selection operations on the first batch of image samples; training a preset box-selection model with the manually drawn annotation boxes; feeding new image samples into the trained box-selection model to obtain the annotation boxes it outputs for the new image samples; when the annotation boxes output by the box-selection model are manually corrected, training the box-selection model with the manually corrected annotation boxes; and, when the number of manually corrected boxes among those output by the box-selection model does not exceed a preset threshold, ending the training process of the box-selection model.
Optionally, the annotation box acquisition module is specifically configured to obtain the position information of annotation boxes using a first process;
the deletion module is specifically configured to delete, for each preset category and using a second process, the box-selected images that do not belong to the category, to obtain the box-selected images of the category;
and the device further includes:
a scheduling module for, once at least one box-selected image has been obtained by the first process, using the second process to obtain the box-selected images of at least one category while the first process obtains the position information of new annotation boxes.
Optionally, the deletion module is specifically configured to create a folder for the category; place box-selected images into the folder of the category; and, based on manual delete operations on the images in the folder of the category, delete the box-selected images that do not belong to the category, to obtain the box-selected images of the category.
The image sample annotation method and device described in this application first obtain the position information of annotation boxes; then, for each preset category, the box-selected images that do not belong to the category are deleted, and the box-selected images of the category, together with their corresponding position information, are taken as the labeled data of the category. The acquisition of annotation boxes is thus decoupled from the deletion-based screening of box-selected images. Compared with manually entering the type of each box-selected image immediately after drawing its annotation box, multiple box-selected images can be deleted in batches, and the step of obtaining annotation-box positions can run in parallel with the screening of box-selected images at a given point in time. Efficiency can therefore be significantly improved over traditional annotation methods.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is an example diagram of image sample annotation in the prior art;
Fig. 2 is a flowchart of an image sample annotation method disclosed in an embodiment of this application;
Fig. 3 is an example diagram of the image sample annotation method disclosed in an embodiment of this application;
Fig. 4 is a flowchart of another image sample annotation method disclosed in an embodiment of this application;
Fig. 5 is a structural diagram of an image sample annotation device disclosed in an embodiment of this application.
Detailed description of embodiments
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
Fig. 2 shows an image sample annotation method disclosed in an embodiment of this application, comprising the following steps:
S201: display an image sample.
S202: obtain box-selected images from manual box-selection operations on the image sample.
Taking Fig. 1 as an example, a tool such as a mouse can be used to box-select targets on the image sample with annotation boxes; in general, the region selected by one annotation box contains one target. The region selected by any annotation box (including the box itself) is a box-selected image.
S203: for each preset category, delete the box-selected images that do not belong to the category, to obtain the box-selected images of the category.
Specifically, folders can be created, each named after a category, and every box-selected image is placed into each folder. Note that every folder initially contains all box-selected images.
The user can open any folder and delete the box-selected images that do not belong to the category named by the folder (i.e., the category corresponding to the folder). Based on these manual delete operations, the box-selected images in each folder that do not belong to the folder's category are deleted, yielding the box-selected images of that category.
S204: for each preset category, take the box-selected images of the category and their corresponding position information as the labeled data of the category.
Here, the position information corresponding to a box-selected image is the position information of the annotation box from which it was obtained. In general, the position information of an annotation box includes the box's top-left and bottom-right coordinates.
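The folder-based screening of S203 and the pairing of S204 can be sketched as follows. This is a minimal illustration only: the file layout, helper names, and record format are assumptions, not part of the patent.

```python
import shutil
from pathlib import Path

# Hypothetical sketch of S203-S204: every box-selected crop is copied into
# one folder per category; after manual deletion, surviving crops are paired
# with their annotation-box coordinates.

def build_labeled_data(crops_dir: str, categories: list) -> None:
    """S203 setup: copy every box-selected image into each category folder."""
    crops = list(Path(crops_dir).glob("*.png"))
    for category in categories:
        folder = Path(crops_dir) / category
        folder.mkdir(exist_ok=True)
        for crop in crops:
            shutil.copy(crop, folder / crop.name)

def collect_labels(crops_dir: str, category: str, positions: dict) -> list:
    """S204: pair each crop left after manual deletion with its box position."""
    folder = Path(crops_dir) / category
    return [
        {"image": crop.name, "category": category, "box": positions[crop.stem]}
        for crop in sorted(folder.glob("*.png"))
    ]
```

A user would delete non-matching crops inside each category folder with an ordinary file browser, then run `collect_labels` once per category.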
Fig. 3 is an example of obtaining labeled data with the procedure shown in Fig. 2:
An image sample is displayed; the targets in the image sample include bottles and sacks. Using an existing tool (shown in the left sidebar of Fig. 3, not described further here), the user box-selects annotation boxes on the image sample, each annotation box selecting one target.
The box-selected images are placed into folders created in advance; in Fig. 3, one folder is named "bottle" and another "sack", each name indicating the folder's category. Note that box-selected images can be placed into the folders after any one or some of them have been obtained, or after all of them have been obtained.
In the "bottle" folder, the user deletes the box-selected images that do not belong to "bottle" (i.e., the "sack" images), leaving the box-selected images of "bottle". After the deletion is complete, each "bottle" box-selected image is associated with the position information of its annotation box, and the box-selected images with their position information serve as the labeled data of the "bottle" category.
In the "sack" folder, the user deletes the box-selected images that do not belong to "sack" (i.e., the "bottle" images), leaving the box-selected images of "sack". After the deletion is complete, each "sack" box-selected image is associated with the position information of its annotation box, and the box-selected images with their position information serve as the labeled data of the "sack" category.
It can be seen that in the image sample annotation method of this embodiment, targets are first box-selected and the box-selected images are then screened in a unified pass, decoupling box selection from category labeling. Box selection and category labeling can therefore be carried out by different personnel, and the two can also run in parallel at a given point in time. Compared with the box-then-label mode shown in Fig. 1, efficiency can be significantly improved.
Moreover, screening box-selected images by deletion is simple and easy to perform.
In Fig. 2, annotation boxes are obtained by manual box selection. Besides manual box selection, annotation boxes can also be obtained in other ways. Fig. 4 shows another image sample annotation method disclosed in an embodiment of this application; its main difference from Fig. 2 is that a box-selection model is trained on manually placed annotation boxes, and the trained box-selection model then derives annotation boxes automatically. Fig. 4 comprises the following steps:
S401: obtain annotation boxes and box-selected images from manual box-selection operations on the first batch of image samples.
Here, annotation boxes are obtained by displaying them in the foreground while the background records their position information.
S402: using the box-selected images of the first batch of image samples, obtain the labeled data of each category in those samples according to steps S203-S204.
S403: train the box-selection model with the manually drawn annotation boxes of the first batch of image samples.
S404: feed new image samples into the trained box-selection model to obtain annotation boxes for the new image samples.
After the model's annotation boxes are obtained, whether they meet the requirements is judged manually. If they do, S406 is triggered; otherwise, the annotation boxes are corrected manually. For example, if an annotation box produced by the trained box-selection model on a new image sample does not fully enclose the target, it is manually enlarged so that it does. After the manual correction, S405 is executed.
Further, an interactive interface can be displayed that asks the user whether corrections are needed, receiving a no-correction-needed instruction as the trigger for S406, or receiving the position information of the user-corrected annotation boxes.
S405: take the manually corrected annotation boxes as the annotation boxes of the new image samples.
S406: using the box-selected images corresponding to the annotation boxes of the new image samples, obtain the labeled data of each category in the new image samples according to steps S203-S204.
Here, the box-selected image corresponding to an annotation box is the region of the image sample enclosed by the annotation box (including the box itself).
Note that when S405 is executed, the annotation boxes of the new image samples are the manually corrected ones; that is, the model's output boxes serve as reference annotation boxes, and the annotation boxes of the new image samples are obtained after manual correction of these reference boxes. When S405 is not executed, the annotation boxes of the new image samples are simply those output by the box-selection model with the new image samples as input.
S407: train the box-selection model with the manually corrected annotation boxes as incremental training data.
S408: judge whether, among the annotation boxes output by the model, the number of manually corrected boxes does not exceed a preset threshold. If so, execute S409; if not, return to S404 and the subsequent flow when new image samples need to be annotated, i.e., continue box-selecting annotation boxes with model output plus manual assistance, and continue training the box-selection model.
S409: end the training process of the box-selection model.
From this point on, the box-selection model can automatically annotate the annotation boxes in subsequent new image samples without manual work.
Note that the number of image samples in the first batch can be preset; because the annotation boxes of the first batch are drawn manually, the first batch usually contains multiple image samples.
New image samples are those that, unlike the already annotated samples, have not yet been annotated and are waiting to be annotated.
As the flow in Fig. 4 shows, the box-selection model is trained on manual annotation boxes, and the training is iterated through manual correction until the box-selection model outputs annotation boxes automatically. Annotation efficiency can therefore be improved once training is complete, further increasing the efficiency of obtaining labeled data.
Further, following the decoupling principle of the annotation method described in the embodiments of this application — separating the acquisition of annotation boxes from the screening of each category's box-selected images — a first process can be used to obtain annotation boxes while a second process deletes the box-selected images that do not belong to a category, obtaining the box-selected images of that category. In this case, once the first process has obtained at least one box-selected image, the second process obtains the box-selected images of at least one category. The steps of obtaining annotation boxes and obtaining each category's box-selected images can therefore run in parallel at a given point in time, further increasing the efficiency of obtaining labeled data.
Specifically, for the flow shown in Fig. 4, a first process can execute S401 or S404, a second process can execute S402 or S406, and a third process can execute the model training steps S403, S405, S407, and S408. The three processes are interconnected yet independent of one another and can run in parallel at a given point in time.
Fig. 5 shows an image sample annotation device disclosed in an embodiment of this application, comprising an annotation box acquisition module, a deletion module, and a labeled data acquisition module, and optionally a model training module and a scheduling module.
The annotation box acquisition module obtains the position information of annotation boxes, where an annotation box is used to box-select a target on the image sample, yielding a box-selected image. The deletion module deletes, for each preset category, the box-selected images that do not belong to the category, to obtain the box-selected images of the category. The labeled data acquisition module takes, for each preset category, the box-selected images of the category and their corresponding position information as the labeled data of the category, where the position information corresponding to a box-selected image is the position information of the annotation box from which it was obtained.
Further, the annotation box acquisition module obtains the position information of annotation boxes by obtaining the position information of annotation boxes output by a pre-trained box-selection model, where the box-selection model is used to box-select targets on the image sample; or by obtaining the position information of the annotation boxes from manual adjustment operations on the position information of reference annotation boxes output by the box-selection model.
The deletion module deletes the box-selected images that do not belong to a category by creating a folder for the category, placing box-selected images into the folder of the category, and, based on manual delete operations on the images in the folder of the category, deleting the box-selected images that do not belong to the category, to obtain the box-selected images of the category.
The model training module obtains manually drawn annotation boxes from manual box-selection operations on the first batch of image samples; trains a preset box-selection model with the manually drawn annotation boxes; feeds new image samples into the trained box-selection model to obtain the annotation boxes it outputs for the new image samples; when the annotation boxes output by the box-selection model are manually corrected, trains the box-selection model with the manually corrected annotation boxes; and, when the number of manually corrected boxes among those output by the box-selection model does not exceed a preset threshold, ends the training process of the box-selection model.
Further, the annotation box acquisition module can use a first process to obtain the position information of annotation boxes, and the deletion module can use a second process to delete, for each preset category, the box-selected images that do not belong to the category, obtaining the box-selected images of the category. In this case, the scheduling module is used so that, once the first process has obtained at least one box-selected image, the second process obtains the box-selected images of at least one category while the first process obtains the position information of new annotation boxes, thereby achieving, within a certain time span, parallel acquisition of box-selected images and deletion-based screening of box-selected images.
With the device shown in Fig. 5, sample annotation can proceed in stages, and the two stages can run in parallel at a given point in time. After the split, the category-labeling step reduces to screening small target images category by category, which lowers the annotation difficulty. The two stages also allow flexible configuration of annotators' skill requirements and headcount, reducing annotation cost and raising annotation efficiency.
Moreover, the annotation flow introduces a general object detection model: manual data annotation and model training iterate with each other, the model automatically generates candidate target coordinate boxes, and manual annotation is thereby assisted and made less difficult. Once the model's detection accuracy reaches a certain level, it can fully replace manual work and complete the annotation of the remaining data.
In summary, the device shown in Fig. 5 can annotate image samples relatively efficiently.
If the functions described in the methods of the embodiments of this application are implemented as software functional units and sold or used as independent products, they can be stored in a storage medium readable by a computing device. Based on this understanding, the part of the embodiments of this application that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), and magnetic or optical disks.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An annotation method for image samples, characterized by comprising:
obtaining location information of annotation boxes, the annotation boxes being used to frame targets on the image samples to obtain framed images;
for any preset category, deleting from the framed images the images that do not belong to the category, to obtain the framed images of the category;
for any preset category, taking the framed images of the category and the corresponding location information as the annotation data of the category, wherein the location information corresponding to any framed image is the obtained location information of the annotation box of that framed image.
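As a non-authoritative sketch, the three steps of claim 1 could be expressed as below; the list-of-lists images, slice-based cropping, and the `(x, y, w, h)` box format are assumptions, not the patent's specified representation:

```python
def annotate(image_samples, boxes, category_of, preset_categories):
    """Sketch of claim 1: frame targets, filter per category, assemble data."""
    # Step 1: each annotation box frames a target, yielding a framed image
    framed = []
    for sample_id, image in image_samples.items():
        x, y, w, h = boxes[sample_id]                     # location information
        crop = [row[x:x + w] for row in image[y:y + h]]   # framed image
        framed.append((sample_id, crop, (x, y, w, h)))

    # Steps 2-3: per preset category, delete non-members, then pair the
    # surviving framed images with their annotation boxes' location info
    annotation_data = {}
    for category in preset_categories:
        kept = [f for f in framed if category_of[f[0]] == category]
        annotation_data[category] = [(crop, box) for _, crop, box in kept]
    return annotation_data

data = annotate(
    image_samples={"a": [[1, 2], [3, 4]], "b": [[5, 6], [7, 8]]},
    boxes={"a": (0, 0, 1, 2), "b": (0, 0, 2, 1)},
    category_of={"a": "cat", "b": "dog"},
    preset_categories=["cat", "dog"],
)
```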
2. The method according to claim 1, characterized in that obtaining the location information of the annotation boxes comprises:
obtaining location information of annotation boxes output by a pre-trained frame-selection model, the frame-selection model being used to frame targets on the image samples;
or, obtaining the location information of the annotation boxes based on manual adjustment of the location information of reference annotation boxes output by the frame-selection model.
3. The method according to claim 2, characterized in that the training process of the frame-selection model comprises:
obtaining manually framed annotation boxes based on manual frame-selection operations on initial image samples;
training a preset frame-selection model using the manually framed annotation boxes;
taking new image samples as input of the trained frame-selection model, and obtaining the annotation boxes of the new image samples output by the frame-selection model;
when the annotation boxes output by the frame-selection model are manually corrected, training the frame-selection model using the manually corrected annotation boxes;
when, among the annotation boxes output by the frame-selection model, the number of manually corrected annotation boxes is not greater than a preset threshold, completing the training process of the frame-selection model.
4. The method according to claim 1, characterized in that obtaining the location information of the annotation boxes comprises:
obtaining the location information of the annotation boxes using a first process;
said deleting, for any preset category, from the framed images the images that do not belong to the category to obtain the framed images of the category comprises:
using a second process to, for any preset category, delete from the framed images the images that do not belong to the category, to obtain the framed images of the category;
the method further comprising:
while at least one framed image is being obtained by the first process, using the second process to obtain the framed images of at least one category, and using the first process to obtain location information of new annotation boxes.
5. The method according to claim 1 or claim 4, characterized in that deleting from the framed images the images that do not belong to the category, to obtain the framed images of the category, comprises:
creating a folder for the category;
putting the framed images into the folder of the category;
deleting from the framed images the images that do not belong to the category, based on manual delete operations on the images in the folder of the category, to obtain the framed images of the category.
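Claim 5's folder-based filtering can be sketched with the Python standard library; the directory layout, the file naming, and the simulated "manual" delete pass are assumptions made for illustration:

```python
import shutil
import tempfile
from pathlib import Path

def filter_by_folder(framed_dir, category, members, workspace):
    """Sketch of claim 5: one folder per category, manual deletion simulated."""
    cat_dir = Path(workspace) / category
    cat_dir.mkdir(parents=True, exist_ok=True)   # create the category folder
    for img in Path(framed_dir).iterdir():       # put the framed images into it
        shutil.copy(str(img), str(cat_dir / img.name))
    for img in list(cat_dir.iterdir()):          # stand-in for the manual pass:
        if img.stem not in members:              # delete images outside the category
            img.unlink()
    return sorted(p.name for p in cat_dir.iterdir())

# demo: three framed images, of which only "a" and "c" belong to category "cat"
src = Path(tempfile.mkdtemp())
for name in ("a", "b", "c"):
    (src / f"{name}.png").write_bytes(b"fake image bytes")
kept = filter_by_folder(src, "cat", members={"a", "c"}, workspace=tempfile.mkdtemp())
```

The surviving files in the category folder, together with their boxes' location information, would form that category's annotation data.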
6. An annotation device for image samples, characterized by comprising:
an annotation box obtaining module, configured to obtain location information of annotation boxes, the annotation boxes being used to frame targets on the image samples to obtain framed images;
a deletion module, configured to, for any preset category, delete from the framed images the images that do not belong to the category, to obtain the framed images of the category;
an annotation data obtaining module, configured to, for any preset category, take the framed images of the category and the corresponding location information as the annotation data of the category, wherein the location information corresponding to any framed image is the obtained location information of the annotation box of that framed image.
7. The device according to claim 6, characterized in that, in obtaining the location information of the annotation boxes, the annotation box obtaining module is specifically configured to obtain location information of annotation boxes output by a pre-trained frame-selection model, the frame-selection model being used to frame targets on the image samples; or, to obtain the location information of the annotation boxes based on manual adjustment of the location information of reference annotation boxes output by the frame-selection model.
8. The device according to claim 7, characterized by further comprising:
a model training module, configured to obtain manually framed annotation boxes based on manual frame-selection operations on initial image samples; train a preset frame-selection model using the manually framed annotation boxes; take new image samples as input of the trained frame-selection model and obtain the annotation boxes of the new image samples output by the frame-selection model; when the annotation boxes output by the frame-selection model are manually corrected, train the frame-selection model using the manually corrected annotation boxes; and when, among the annotation boxes output by the frame-selection model, the number of manually corrected annotation boxes is not greater than a preset threshold, complete the training process of the frame-selection model.
9. The device according to claim 6, characterized in that, in obtaining the location information of the annotation boxes, the annotation box obtaining module is specifically configured to obtain the location information of the annotation boxes using a first process;
in deleting, for any preset category, from the framed images the images that do not belong to the category to obtain the framed images of the category, the deletion module is specifically configured to use a second process to, for any preset category, delete from the framed images the images that do not belong to the category, to obtain the framed images of the category;
the device further comprising:
a scheduling module, configured to, while at least one framed image is being obtained by the first process, use the second process to obtain the framed images of at least one category, and use the first process to obtain location information of new annotation boxes.
10. The device according to any one of claims 6 to 9, characterized in that, in deleting, for any preset category, from the framed images the images that do not belong to the category to obtain the framed images of the category, the deletion module is specifically configured to create a folder for the category; put the framed images into the folder of the category; and delete from the framed images the images that do not belong to the category, based on manual delete operations on the images in the folder of the category, to obtain the framed images of the category.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910319246.8A CN110058756B (en) | 2019-04-19 | 2019-04-19 | Image sample labeling method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910319246.8A CN110058756B (en) | 2019-04-19 | 2019-04-19 | Image sample labeling method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110058756A true CN110058756A (en) | 2019-07-26 |
CN110058756B CN110058756B (en) | 2021-03-02 |
Family
ID=67319787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910319246.8A Active CN110058756B (en) | 2019-04-19 | 2019-04-19 | Image sample labeling method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110058756B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008151326A2 (en) * | 2007-06-08 | 2008-12-11 | Microsoft Corporation | Face annotation framework with partial clustering and interactive labeling |
CN105678322A (en) * | 2015-12-31 | 2016-06-15 | 百度在线网络技术(北京)有限公司 | Sample labeling method and apparatus |
CN108537240A (en) * | 2017-03-01 | 2018-09-14 | 华东师范大学 | Commodity image semanteme marking method based on domain body |
CN108921204A (en) * | 2018-06-14 | 2018-11-30 | 平安科技(深圳)有限公司 | Electronic device, picture sample set creation method and computer readable storage medium |
CN108920711A (en) * | 2018-07-25 | 2018-11-30 | 中国人民解放军国防科技大学 | Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide |
CN109446369A (en) * | 2018-09-28 | 2019-03-08 | 武汉中海庭数据技术有限公司 | The exchange method and system of the semi-automatic mark of image |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110399514A (en) * | 2019-07-29 | 2019-11-01 | 中国工商银行股份有限公司 | Method and apparatus for being classified to image and being marked |
CN110399514B (en) * | 2019-07-29 | 2022-03-29 | 中国工商银行股份有限公司 | Method and device for classifying and labeling images |
CN111783635A (en) * | 2020-06-30 | 2020-10-16 | 北京百度网讯科技有限公司 | Image annotation method, device, equipment and storage medium |
CN111914822A (en) * | 2020-07-23 | 2020-11-10 | 腾讯科技(深圳)有限公司 | Text image labeling method and device, computer readable storage medium and equipment |
CN111914822B (en) * | 2020-07-23 | 2023-11-17 | 腾讯科技(深圳)有限公司 | Text image labeling method, device, computer readable storage medium and equipment |
CN113160209A (en) * | 2021-05-10 | 2021-07-23 | 上海市建筑科学研究院有限公司 | Target marking method and target identification method for building facade damage detection |
CN114003164A (en) * | 2021-10-14 | 2022-02-01 | 中国第一汽车股份有限公司 | Traffic participant position and action labeling method based on natural driving data |
Also Published As
Publication number | Publication date |
---|---|
CN110058756B (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110058756A (en) | A kind of mask method and device of image pattern | |
CN106528395B (en) | The generation method and device of test case | |
CN107818293A (en) | Method and apparatus for handling cloud data | |
CN106469215B (en) | Data importing method and system based on webpage end | |
CN105224463B (en) | A kind of software defect Code location method based on crash stack data | |
CN109241967A (en) | Thyroid ultrasound automatic image recognition system, computer equipment, storage medium based on deep neural network | |
CN107925786A (en) | The data visualization video of animation | |
CN110177122A (en) | A kind of method for establishing model and device identifying network security risk | |
CN105930773A (en) | Motion identification method and device | |
CN109255413A (en) | Test parameter calling system and method | |
CN104866108A (en) | Multifunctional dance experience system | |
CN109840195A (en) | Webpage method for analyzing performance, terminal device and computer readable storage medium | |
CN104156199B (en) | A kind of automatic continuous integrated approach of software and system | |
CN108876790A (en) | Image, semantic dividing method and device, neural network training method and device | |
CN108198072A (en) | A kind of system of artificial intelligence assessment financial product feature | |
CN107133631A (en) | A kind of method and device for recognizing TV station's icon | |
CN106569949A (en) | Method and device used for executing test case | |
CN207099116U (en) | Data acquisition and remote control | |
CN109815224A (en) | Data quality checking and the method and apparatus of cleaning | |
CN107656869A (en) | A kind of method that exclusive automatic test report is built based on JAVA | |
WO2021013871A1 (en) | Computer implemented method, computer program and physical computing environment | |
CN107135402A (en) | A kind of method and device for recognizing TV station's icon | |
JP4770495B2 (en) | Simulation model generator | |
CN104461249B (en) | The arrangement display methods and device of graphical interfaces | |
CN107085578A (en) | A kind of page authoring method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||