CN109409364A - Image labeling method and device - Google Patents
Image labeling method and device
- Publication number
- CN109409364A, CN201811204069.0A
- Authority
- CN
- China
- Prior art keywords
- image
- labeled
- reference image
- detected
- label information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose an image labeling method and device. One specific embodiment of the method includes: obtaining the label information of a reference image of an image to be labeled in a video, the label information of the reference image including the label information of the objects detected in the reference image; based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, finding the target objects in the reference image, and performing a labeling operation on the image to be labeled, the labeling operation including: using the label information of a target object as the label information of the object in the image to be labeled that corresponds to the target object, where a target object is an object that represents the same object as an object in the image to be labeled. In this way, during the labeling of images in a video, the label information of images that have already been labeled is automatically used to label images that have not yet been labeled, saving the overhead of the labeling process.
Description
Technical field
The present application relates to the field of computers, in particular to the field of neural networks, and more particularly to an image labeling method and device.
Background art
Tasks such as target tracking and image recognition require massive amounts of labeled images for training neural networks.
Labeling the images in a video is one of the main ways of obtaining such massive amounts of labeled images.
Currently, the labeling approach generally adopted is to label each image in a video one by one manually, which is expensive.
Summary of the invention
Embodiments of the present application provide an image labeling method and device.
In a first aspect, an embodiment of the present application provides an image labeling method. The method includes: obtaining label information of a reference image of an image to be labeled in a video, the label information of the reference image including label information of the objects detected in the reference image; based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, finding target objects in the reference image, and performing a labeling operation on the image to be labeled, the labeling operation including: using the label information of a target object as the label information of the object in the image to be labeled that corresponds to the target object, where a target object is an object that represents the same object as an object in the image to be labeled.
In a second aspect, an embodiment of the present application provides an image labeling device. The device includes: an acquiring unit, configured to obtain label information of a reference image of an image to be labeled in a video, the label information of the reference image including label information of the objects detected in the reference image; and a labeling unit, configured to find target objects in the reference image based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, and to perform a labeling operation on the image to be labeled, the labeling operation including: using the label information of a target object as the label information of the object in the image to be labeled that corresponds to the target object, where a target object is an object that represents the same object as an object in the image to be labeled.
The image labeling method and device provided by the embodiments of the present application obtain the label information of a reference image of an image to be labeled in a video, the label information of the reference image including the label information of the objects detected in the reference image; based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, the target objects in the reference image are found, and a labeling operation is performed on the image to be labeled, the labeling operation including: using the label information of a target object as the label information of the object in the image to be labeled that corresponds to the target object, where a target object is an object that represents the same object as an object in the image to be labeled. In this way, during the labeling of images in a video, the label information of images that have already been labeled is automatically used to label images that have not yet been labeled, saving the overhead of the labeling process.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 shows an exemplary system architecture suitable for implementing embodiments of the present application;
Fig. 2 shows a flow chart of one embodiment of the image labeling method according to the present application;
Fig. 3 shows a structural schematic diagram of one embodiment of the image labeling device according to the present application;
Fig. 4 shows a structural schematic diagram of a computer system suitable for implementing an electronic device of an embodiment of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention and are not a limitation of the invention. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture suitable for implementing embodiments of the present application.
As shown in Fig. 1, the system architecture may include a terminal 101, a network 102, and an electronic device 103. The network 102 may include various connection types, such as wired or wireless transmission links, or fiber optic cables.
The terminal 101 exchanges data with the electronic device 103 over the network 102. The terminal 101 may include, but is not limited to, a smartphone, a tablet computer, a laptop computer, or a desktop computer.
The user of the terminal 101 may be an annotator. The electronic device 103 may use the label information of images that have already been labeled in a video to label images to be labeled, and send the labeling results to the terminal 101, where they are presented to the user; the user may then refine the labels further.
Referring to Fig. 2, which illustrates the flow of one embodiment of the image labeling method according to the present application, the method includes the following steps:
Step 201: obtain the label information of a reference image of an image to be labeled in a video.
In the present embodiment, the images in a video are in units of frames; a frame of the video may be referred to as an image. When the video is played, the images in the video are presented to the user one after another in order. Each image thus corresponds to a presentation time, and, correspondingly, the images are ordered according to their presentation times. The presentation time corresponding to the reference image of an image to be labeled is earlier than the presentation time corresponding to the image to be labeled.
In the present embodiment, labeling an image to be labeled is equivalent to labeling the objects detected in the image to be labeled, i.e., obtaining the label information of each object detected in the image to be labeled. When labeling an image to be labeled, the label information of the reference image of the image to be labeled can be obtained. The label information of the reference image includes the label information of the multiple objects detected in the reference image.
In the present embodiment, multiple images to be labeled can be selected in advance from the video as all the images to be labeled. The image whose presentation time is earliest among the multiple images to be labeled can be labeled first, and the remaining images can then be labeled one by one in order of presentation time from earliest to latest. For each image to be labeled other than the first image to be labeled, the reference image of the image to be labeled can be the most recently labeled image before that image to be labeled. Thus, when labeling each image to be labeled other than the first one among all the images to be labeled, the label information of the reference image of the image to be labeled can be used to label the image to be labeled.
In other words, each time an image to be labeled is labeled, the label information of the most recently labeled image before the current image to be labeled can be used to label the current image to be labeled.
In the present embodiment, when labeling each image to be labeled, a neural network for object detection can be used to detect the objects in the image. The neural network for object detection surrounds each object in the image with a detection box, and may be a convolutional neural network. After an image is input into the convolutional neural network for object detection, the detection result output by the convolutional neural network is obtained. The detection result includes the identification information of each detected object, and the identification information includes: the name of the object the detection represents, the position of the detection box, and the size of the detection box.
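Purely for illustration (this representation and its field names are assumptions, not terms defined in the patent), the detection result for one image might be written as:

```python
# Hypothetical detection result for one image: the identification information of each
# detected object, i.e. the name of the object it represents, the position of its
# detection box (taken here as the box's center point), and the size of the detection box.
detection_result = [
    {"name": "car",        "box_center": (320.0, 180.0), "box_size": (120.0, 80.0)},
    {"name": "pedestrian", "box_center": (96.0, 210.0),  "box_size": (40.0, 110.0)},
]
```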
In the present embodiment, when labeling the image to be labeled whose presentation time is earliest among the multiple images to be labeled, the neural network for object detection can first be used to detect the objects in that earliest image, obtaining the identification information of each detected object. The identification information of each detected object can then be presented to the user, for example an annotator. When any item of the identification information is inaccurate, the annotator can perform an adjustment operation, for example adjusting the position of a detection box to the accurate position, adjusting the size of a detection box to the accurate size, or, when the name of the object represented by a detected object in the identification information is wrong, correcting it to the correct name of the object represented. The label information of the image to be labeled with the earliest presentation time can then be generated; it includes, for each detected object in that image, the accurate position of its detection box, the accurate size of its detection box, and the correct name of the object it represents.
Step 202: find the target objects in the reference image based on features of the objects detected in the image to be labeled and in the reference image, and perform a labeling operation on the image to be labeled.
In the present embodiment, an image to be labeled is labeled by performing a labeling operation on the image to be labeled. The thing that an object in an image represents may itself be referred to as an object; in other words, an object in an image stands for an object in the scene. In an image, the size of the detection box of an object detected by the neural network for object detection can be taken as the size of the object, and the position of the detection box of an object detected by the neural network for object detection can be taken as the position of the object. The position of a detection box can be the position of the center point of the detection box.
In the present embodiment, when using the reference image of an image to be labeled to label that image, the target objects in the reference image of the image to be labeled can be found based on the features of the objects detected in the image to be labeled and the features of the objects detected in the reference image of the image to be labeled. A target object is an object that represents the same object as an object in the image to be labeled.
When an object in the reference image of the image to be labeled and an object in the image to be labeled represent the same object, the object in the reference image of the image to be labeled is a target object, and the object in the image to be labeled is the object corresponding to that target object.
In the present embodiment, when finding the target objects based on the features of the objects detected in the image to be labeled and the features of the objects detected in the reference image, the target objects in the reference image of the image to be labeled can be found from the similarity between the size and position of each object detected in the image to be labeled and the size and position of each object detected in the reference image of the image to be labeled; at the same time, the objects in the image to be labeled that correspond to the target objects are found.
For example, if the size of an object detected in the reference image of the image to be labeled is the same as the size of an object detected in the image to be labeled, and the distance between the position of the object detected in the reference image of the image to be labeled and the position of the object detected in the image to be labeled is less than a distance threshold, then the object detected in the reference image of the image to be labeled and the object detected in the image to be labeled represent the same object; the object in the reference image of the image to be labeled is a target object, and the object in the image to be labeled is the object corresponding to that target object.
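A minimal sketch of this size-and-distance rule follows. It is an illustration only: the function name, the dictionary fields, the exact-size comparison, and the example threshold value are assumptions rather than part of the patent.

```python
import math

def find_target_object(obj, reference_objects, distance_threshold=20.0):
    """Return the object in the reference image that represents the same object as
    `obj` in the image to be labeled, judged by identical detection-box size and a
    center-point distance below the threshold; return None if there is no match."""
    for ref in reference_objects:
        same_size = ref["box_size"] == obj["box_size"]
        distance = math.dist(ref["box_center"], obj["box_center"])
        if same_size and distance < distance_threshold:
            return ref  # target object: its label information can be copied to `obj`
    return None
```

Requiring exactly equal sizes, as in the example above, would be brittle in practice; a tolerance on the size difference could be used instead.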
In the present embodiment, after each target object in the reference image of the image to be labeled has been found, the label information of each target object can be used, respectively, as the label information of the object in the image to be labeled that corresponds to that target object.
In some optional implementations of the present embodiment, when finding the target objects in the reference image of the image to be labeled, the target objects in the reference image can be found based on the intersection over union (IoU) of the detection boxes of the objects detected in the image to be labeled and the detection boxes of the objects detected in the reference image. When the IoU of the detection box of an object detected in the reference image of the image to be labeled and the detection box of an object detected in the image to be labeled is greater than an IoU threshold, the object detected in the reference image of the image to be labeled and the object detected in the image to be labeled represent the same object; the object in the reference image of the image to be labeled is a target object, and the object in the image to be labeled is the object corresponding to that target object.
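A sketch of the intersection-over-union variant is given below, again as an illustration only: boxes are assumed to be given as a center point plus width and height, the threshold value is an assumption, and returning the best-overlapping box above the threshold (rather than any box above it) is a choice made for the sketch.

```python
def iou(box_a, box_b):
    """Intersection over union of two detection boxes, each given as
    (center_x, center_y, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Convert to corner coordinates.
    ax1, ay1, ax2, ay2 = ax - aw / 2, ay - ah / 2, ax + aw / 2, ay + ah / 2
    bx1, by1, bx2, by2 = bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def find_target_object_by_iou(obj, reference_objects, iou_threshold=0.5):
    """Return the reference-image object whose detection box overlaps the detection box
    of `obj` with an IoU greater than the threshold; return None if none does."""
    best, best_iou = None, iou_threshold
    for ref in reference_objects:
        score = iou(obj["box_center"] + obj["box_size"],
                    ref["box_center"] + ref["box_size"])
        if score > best_iou:
            best, best_iou = ref, score
    return best
```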
In some optional implementations of the present embodiment, all the images in the video can be taken as images to be labeled, and the images can be labeled one by one starting from the first image in the video. The reference image of an image to be labeled is then the previous image of that image in the video. When labeling an image to be labeled, the label information of the previous image can be used to label the image.
For example, the 1st image is labeled first, and the label information of the 1st image is obtained. When labeling the 2nd image, the 1st image serves as the reference image of the 2nd image: according to the IoU of the detection boxes of the objects detected in the 1st image and the detection boxes of the objects detected in the 2nd image, the target objects in the 1st image are determined, and the label information of a target object in the 1st image is used as the label information of the object in the 2nd image that represents the same object as that target object; the objects detected in the 2nd image are thereby labeled. In the same way, the 2nd image can serve as the reference image of the 3rd image, and the label information of the 2nd image can be used to label the 3rd image, and so on, until all the images have been labeled.
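The frame-by-frame propagation described in this example might be sketched as follows. This is an illustration only: the function names and the label representation are assumptions, the detector and matcher are passed in as callables, and the annotator's correction of the first image is reduced to a placeholder callable.

```python
def propagate_labels(frames, detect_objects, find_target_object, correct_first_frame):
    """Label every frame of a video: the first frame's detections are corrected by an
    annotator (placeholder callable), and each later frame copies label information
    from matching objects in its reference image, i.e. the previous frame."""
    all_labels = []
    # Frame 1: detect, let the annotator correct, and keep the result as label information.
    first_labels = correct_first_frame(detect_objects(frames[0]))
    all_labels.append(first_labels)
    # Frames 2..N: use the previous (already labeled) frame as the reference image.
    for frame in frames[1:]:
        reference_labels = all_labels[-1]
        frame_labels = []
        for obj in detect_objects(frame):
            target = find_target_object(obj, reference_labels)
            if target is not None:
                # The object represents the same object as the target object in the
                # reference image, so its label (here, the object name) is copied over.
                obj["name"] = target["name"]
            frame_labels.append(obj)
        all_labels.append(frame_labels)
    return all_labels
```

Here the previous frame serves as the reference image, matching the optional implementation above; objects for which no target object is found keep their detected identification information and can be labeled manually, as described below.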
When labeling the first image in the video, the neural network for object detection can first be used to detect the objects in the first image in the video, obtaining the identification information of each detected object. The identification information of each detected object can then be presented to the user, for example an annotator. When any item of the identification information is inaccurate, the annotator can perform an adjustment operation, for example adjusting the position of a detection box to the accurate position, adjusting the size of a detection box to the accurate size, or, when the name of the object represented by a detected object in the identification information is wrong, correcting it to the correct name of the object represented. The label information of the first image in the video can then be generated; it includes, for each detected object in the first image, the accurate position of its detection box, the accurate size of its detection box, and the correct name of the object it represents.
In some optional implementations of the present embodiment, after an image has been labeled by performing the labeling operation on it, when it is determined that some objects in the image have no label information, a user, for example an annotator, can perform a labeling operation on those objects to generate their label information. In other words, when the image is labeled, for the objects that have not been labeled, the user can label those objects to obtain their label information.
Referring to Fig. 3, as an implementation of the methods shown in the above figures, the present application provides one embodiment of an image labeling device. This device embodiment corresponds to the method embodiment shown in Fig. 2. For the specific implementation of the operations that each unit of the device is configured to perform, reference may be made to the specific implementation of the corresponding operations described in the method embodiment.
As shown in Fig. 3, the image labeling device of the present embodiment includes: an acquiring unit 301 and a labeling unit 302. The acquiring unit 301 is configured to obtain the label information of a reference image of an image to be labeled in a video, the label information of the reference image including the label information of the objects detected in the reference image. The labeling unit 302 is configured to find the target objects in the reference image based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, and to perform a labeling operation on the image to be labeled, the labeling operation including: using the label information of a target object as the label information of the object in the image to be labeled that corresponds to the target object, where a target object is an object that represents the same object as an object in the image to be labeled.
In some optional implementations of the present embodiment, the labeling unit is further configured to find the target objects in the reference image based on the IoU of the detection boxes of the objects detected in the image to be labeled and the detection boxes of the objects detected in the reference image, where a detection box is a frame, output by the neural network for detecting objects in an image, that surrounds at least part of an object.
In some optional implementations of the present embodiment, the reference image of the image to be labeled is the previous image of the image to be labeled in the video.
In some optional implementations of the present embodiment, the image labeling device further includes: a first auxiliary labeling unit, configured to detect the objects in the first image in the video using the neural network for object detection, obtaining a detection result, the detection result including the identification information of each detected object; to present the detection result to the user; and to generate, based on the user's labeling operations, the label information of each object detected in the first image.
In some optional implementations of the present embodiment, the image labeling device further includes: a second auxiliary labeling unit, configured to, in response to determining that some objects in an image on which the labeling operation has been performed have no label information, generate the label information of those objects based on the user's labeling operations on them.
Fig. 4 shows a structural schematic diagram of a computer system suitable for implementing an electronic device of an embodiment of the present application.
As shown in Fig. 4, the computer system includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. Various programs and data required for the operation of the computer system are also stored in the RAM 403. The CPU 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406; an output section 407; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it can be installed into the storage section 408 as needed.
In particular, the processes described in the embodiments of the present application may be implemented as computer programs. For example, an embodiment of the present application includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program including instructions for performing the method shown in the flow chart. The computer program may be downloaded and installed from a network via the communication section 409, and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above-mentioned functions defined in the method of the present application are performed.
The present application also provides an electronic device, which may be configured with one or more processors and a memory for storing one or more programs. The one or more programs may include instructions for performing the operations described in the above embodiments. When the one or more programs are executed by the one or more processors, the one or more processors are caused to execute the instructions for the operations described in the above embodiments.
The present application also provides a computer-readable medium. The computer-readable medium may be included in the electronic device, or it may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to execute the instructions for the operations described in the above embodiments.
It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by, or in connection with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a computer-readable medium can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flow charts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by combinations of dedicated hardware and computer instructions.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the inventive concept, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (12)
1. An image labeling method, comprising:
obtaining label information of a reference image of an image to be labeled in a video, the label information of the reference image comprising: label information of objects detected in the reference image;
based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, finding a target object in the reference image, and performing a labeling operation on the image to be labeled, the labeling operation comprising: using the label information of the target object as the label information of an object in the image to be labeled that corresponds to the target object, wherein the target object is an object that represents the same object as an object in the image to be labeled.
2. The method according to claim 1, wherein finding the target object in the reference image based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image comprises:
finding the target object in the reference image based on an intersection over union of a detection box of an object detected in the image to be labeled and a detection box of an object detected in the reference image, wherein a detection box is a frame, output by a neural network for detecting objects in an image, that surrounds at least part of an object.
3. The method according to claim 2, wherein the reference image of the image to be labeled is the previous image of the image to be labeled in the video.
4. The method according to claim 3, wherein the method further comprises:
detecting objects in a first image in the video using a neural network for object detection, to obtain a detection result, the detection result comprising: identification information of each detected object;
presenting the detection result to a user;
generating, based on labeling operations of the user, label information of each object detected in the first image.
5. The method according to claim 4, wherein the method further comprises:
in response to determining that some objects in an image on which the labeling operation has been performed have no label information, generating the label information of those objects based on labeling operations of the user on those objects.
6. An image labeling device, comprising:
an acquiring unit, configured to obtain label information of a reference image of an image to be labeled in a video, the label information of the reference image comprising: label information of objects detected in the reference image;
a labeling unit, configured to find a target object in the reference image based on features of the objects detected in the image to be labeled and features of the objects detected in the reference image, and to perform a labeling operation on the image to be labeled, the labeling operation comprising: using the label information of the target object as the label information of an object in the image to be labeled that corresponds to the target object, wherein the target object is an object that represents the same object as an object in the image to be labeled.
7. The device according to claim 6, wherein the labeling unit is further configured to find the target object in the reference image based on an intersection over union of a detection box of an object detected in the image to be labeled and a detection box of an object detected in the reference image, wherein a detection box is a frame, output by a neural network for detecting objects in an image, that surrounds at least part of an object.
8. The device according to claim 7, wherein the reference image of the image to be labeled is the previous image of the image to be labeled in the video.
9. The device according to claim 8, wherein the device further comprises:
a first auxiliary labeling unit, configured to detect objects in a first image in the video using a neural network for object detection, to obtain a detection result, the detection result comprising: identification information of each detected object; to present the detection result to a user; and to generate, based on labeling operations of the user, label information of each object detected in the first image.
10. The device according to claim 9, wherein the device further comprises:
a second auxiliary labeling unit, configured to, in response to determining that some objects in an image on which the labeling operation has been performed have no label information, generate the label information of those objects based on labeling operations of the user on those objects.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1 to 5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811204069.0A | 2018-10-16 | 2018-10-16 | Image labeling method and device |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN109409364A | 2019-03-01 |
Family
ID=65468171
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811204069.0A (pending) | 2018-10-16 | 2018-10-16 | Image labeling method and device |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN109409364A |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106778585A | 2016-12-08 | 2017-05-31 | 腾讯科技(上海)有限公司 | Face key point tracking method and device |
| CN107644204A | 2017-09-12 | 2018-01-30 | 南京凌深信息科技有限公司 | Human body recognition and tracking method for a security system |
| CN108399362A | 2018-01-24 | 2018-08-14 | 中山大学 | Rapid pedestrian detection method and device |
| CN108416799A | 2018-03-06 | 2018-08-17 | 北京市商汤科技开发有限公司 | Target tracking method and device, electronic device, program, and storage medium |
2018
- 2018-10-16 CN CN201811204069.0A patent/CN109409364A/en active Pending
Cited By (12)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110084895A | 2019-04-30 | 2019-08-02 | 上海禾赛光电科技有限公司 | Method and device for labeling point cloud data |
| CN110084895B | 2019-04-30 | 2023-08-22 | 上海禾赛科技有限公司 | Method and device for labeling point cloud data |
| CN110264515A | 2019-05-07 | 2019-09-20 | 联想(上海)信息技术有限公司 | Labeling method and electronic device |
| CN110264515B | 2019-05-07 | 2023-08-18 | 联想(上海)信息技术有限公司 | Labeling method and electronic device |
| CN110458226A | 2019-08-08 | 2019-11-15 | 上海商汤智能科技有限公司 | Image labeling method and device, electronic device, and storage medium |
| CN111598006A | 2020-05-18 | 2020-08-28 | 北京百度网讯科技有限公司 | Method and device for labeling objects |
| CN111598006B | 2020-05-18 | 2023-05-26 | 阿波罗智联(北京)科技有限公司 | Method and device for labeling objects |
| CN111814885A | 2020-07-10 | 2020-10-23 | 云从科技集团股份有限公司 | Method, system, device, and medium for managing image frames |
| CN111882582A | 2020-07-24 | 2020-11-03 | 广州云从博衍智能科技有限公司 | Image tracking correlation method, system, device, and medium |
| CN111882582B | 2020-07-24 | 2021-10-08 | 广州云从博衍智能科技有限公司 | Image tracking correlation method, system, device, and medium |
| CN113343857A | 2021-06-09 | 2021-09-03 | 浙江大华技术股份有限公司 | Labeling method and device, storage medium, and electronic device |
| CN113378958A | 2021-06-24 | 2021-09-10 | 北京百度网讯科技有限公司 | Automatic labeling method, device, equipment, storage medium, and computer program product |
Similar Documents

| Publication | Title |
|---|---|
| CN109409364A | Image labeling method and device |
| CN112966712B | Language model training method and device, electronic device, and computer-readable medium |
| CN108171207A | Face recognition method and device based on video sequences |
| CN109325541A | Method and apparatus for training a model |
| CN110046600A | Method and apparatus for human body detection |
| CN108985208A | Method and apparatus for generating an image detection model |
| CN109934242A | Image recognition method and device |
| CN108898185A | Method and apparatus for generating an image recognition model |
| CN108830235A | Method and apparatus for generating information |
| WO2020062493A1 | Image processing method and apparatus |
| CN109359676A | Method and apparatus for generating vehicle damage information |
| CN109086719A | Method and apparatus for outputting data |
| US20210264198A1 | Positioning method and apparatus |
| CN109657251A | Method and apparatus for translating sentences |
| CN109063653A | Image processing method and device |
| CN109086780A | Method and apparatus for detecting electrode sheet burrs |
| CN109255767A | Image processing method and device |
| CN108509921A | Method and apparatus for generating information |
| CN108510084A | Method and apparatus for generating information |
| CN110070076A | Method and apparatus for selecting training samples |
| CN109543068A | Method and apparatus for generating comment information for a video |
| CN109086828A | Method and apparatus for detecting a battery pole piece |
| CN115631212A | Method and device for determining a person's accompanying trajectory, electronic device, and readable medium |
| CN109064464A | Method and apparatus for detecting battery pole piece burrs |
| CN110084298A | Method and device for detecting image similarity |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190301 |