CN107590450A - Moving-target labeling method, device and unmanned aerial vehicle - Google Patents
- Publication number: CN107590450A
- Application number: CN201710779916.5A
- Authority: CN (China)
- Prior art keywords: image, target, marked, target area, motion
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
Abstract
The invention discloses a moving-target labeling method, a labeling device and an unmanned aerial vehicle (UAV). The method includes: acquiring a first image captured by a camera; in response to a recognition instruction for a target to be marked sent by a terminal device, identifying one or more motion-target regions in the first image; obtaining the bounding figure of each motion-target region and, from these bounding figures and the touch-point coordinate carried in the recognition instruction, determining the motion-target region corresponding to the target to be marked; and marking the bounding figure of the determined motion-target region in the first image. Using the bounding figure of the motion-target region as the mark not only ensures that the target to be marked lies completely within the marked region, so that it can be marked more accurately, but also reduces the background information in the marked region, avoiding interference from the background and improving the labeling accuracy.
Description
Technical field
The present invention relates to the field of tracking technology, and more particularly to a moving-target labeling method, a labeling device and an unmanned aerial vehicle.
Background technology
Unmanned aerial vehicle (UAV) tracking of ground moving targets has great theoretical and practical value and is an important research direction in the field of autonomous UAV control. Before a UAV can autonomously track a ground moving target, the region of the target to be tracked must be marked so that the features of the target can be extracted; tracking is then performed on the basis of the extracted features. The accuracy of the marking therefore directly affects the accuracy of the subsequent tracking. In the prior art, however, when the region of the target to be tracked is marked, the marked region of the moving target is often inaccurate, so that a large amount of background information is mixed into the marked region of the target, which easily increases the interference affecting the target to be tracked.
Summary of the invention
In view of the above problems, the present invention proposes a moving-target labeling method, a labeling device and a UAV that solve, or at least partly solve, the above problems.
According to one aspect of the invention, there is provided a moving-target labeling method, the method comprising:
acquiring a first image captured by a camera;
in response to a recognition instruction for a target to be marked sent by a terminal device, identifying one or more motion-target regions in the first image, the recognition instruction including the touch-point coordinate of the target to be marked;
obtaining the bounding figure of each motion-target region and, according to the bounding figures and the touch-point coordinate of the target to be marked, determining the motion-target region corresponding to the target to be marked;
marking the bounding figure of the determined motion-target region in the first image.
Preferably, the method further comprises:
in response to a cancel-current-mark instruction sent by the terminal device, removing the mark from the first image; and/or
in response to a re-mark instruction sent by the terminal device, removing the mark from the first image and re-executing the steps from acquiring the first image captured by the camera through marking the bounding figure of the determined motion-target region in the first image, the re-mark instruction including the touch-point coordinate of the target to be re-marked.
According to another aspect of the invention, there is provided a moving-target labeling device, the device comprising:
an image acquisition unit for acquiring a first image captured by a camera;
a recognition unit for identifying one or more motion-target regions in the first image in response to a recognition instruction for a target to be marked sent by a terminal device, the recognition instruction including the touch-point coordinate of the target to be marked;
a determining unit for obtaining the bounding figure of each motion-target region and, according to the bounding figures and the touch-point coordinate of the target to be marked, determining the motion-target region corresponding to the target to be marked;
a marking unit for marking the bounding figure of the determined motion-target region in the first image.
According to a further aspect of the invention, there is provided a UAV comprising a camera, a wireless connection unit and the moving-target labeling device described above.
The camera captures the first image and outputs it.
The wireless connection unit receives the marking instruction sent by the terminal device and forwards it to the moving-target labeling device, the marking instruction including the touch-point coordinate of the target to be marked.
The moving-target labeling device, in response to the recognition instruction for the target to be marked sent by the terminal device, identifies one or more motion-target regions in the first image; obtains the bounding figure of each motion-target region; determines, according to the bounding figures and the touch-point coordinate of the target to be marked, the motion-target region corresponding to the target to be marked; and marks the bounding figure of the determined region in the first image.
In summary, the technical solution of the present invention has the following beneficial effects. On the one hand, the region of the target to be marked is marked automatically: the marking is completed by the system without the user having to frame-select the target manually; the user only needs to tap the target, i.e. only the touch-point coordinate of the user needs to be acquired, which is simple and convenient. On the other hand, using the bounding figure of the motion-target region as the mark not only ensures that the target to be marked lies completely within the marked region, so that it can be marked more accurately, but also reduces the background information in the marked region, avoiding background interference and improving the labeling accuracy.
To track the target to be marked, features must be extracted from the marked region; during tracking, the moving target is then determined by matching the image features of each image captured by the camera against the extracted features. In the present solution, when extracting the features of the target to be marked, only the features inside the bounding figure are extracted. Because the bounding figure contains little background information, background interference is reduced and the accuracy of the extracted features is ensured, which in turn improves the tracking accuracy; moreover, during the subsequent tracking, the target to be marked can be kept inside the bounding figure.
Brief description of the drawings
Fig. 1 is a schematic flow chart of a moving-target labeling method provided by an embodiment of the invention;
Fig. 2(a) is a schematic diagram of a first image provided by an embodiment of the invention;
Fig. 2(b) is a schematic diagram of a binary image provided by an embodiment of the invention;
Fig. 2(c) is a schematic diagram of the binary image after closing-operation processing provided by an embodiment of the invention;
Fig. 2(d) is a schematic diagram of the binary image after dilation processing provided by an embodiment of the invention;
Fig. 2(e) is a schematic diagram of the binary image after connected-region processing provided by an embodiment of the invention;
Fig. 2(f) is a schematic diagram of multiple motion-target regions in the first image provided by an embodiment of the invention;
Fig. 2(g) is a schematic diagram of the marking result for the motion-target region to be marked provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the functional structure of a moving-target labeling device provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of the functional structure of a moving-target labeling device provided by another embodiment of the invention;
Fig. 5 is a schematic diagram of the functional structure of a UAV provided by an embodiment of the invention.
Detailed description of the embodiments
In one approach, the target to be marked is marked by manual frame selection: the region of the target is delimited directly by dragging a selection frame on the display interface of the terminal device. However, the dragged frame rarely fits the target to be marked exactly. If the frame is too large, the marked region is mixed with a large amount of background information — for example, the target makes up only 80% of the region and 20% is background; if the frame is too small, it cannot fully contain the target — for example, it contains only 70% of the target, and the inaccurate marking makes the feature extraction of the target inaccurate. All of the above defects affect the accuracy of the marking.
In the present invention, the touch-point information generated when the user taps the target to be marked is acquired, one or more moving targets are identified in the first image, the motion-target region corresponding to the target to be marked is obtained automatically, and the region enclosed by the bounding rectangle of that motion-target region is marked in the first image, thereby marking the region of the target accurately. To make the objects, technical solutions and advantages of the present invention clearer, embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a moving-target labeling method provided by an embodiment of the invention. As shown in Fig. 1, the method includes:
Step S110: acquire a first image captured by a camera.
In this embodiment, the first image may be an image captured by the camera in real time. That is, after the user selects the target to be marked, the image corresponding to the moment of selection must be processed; the first image here may thus be the current frame captured by the camera, i.e. the frame corresponding to the moment at which the user selects the tracking target.
Step S120: in response to a recognition instruction for the target to be marked sent by a terminal device, identify one or more motion-target regions in the first image; the recognition instruction includes the touch-point coordinate of the target to be marked.
The terminal device is wirelessly connected to the UAV, and the UAV transmits the images captured by the camera to the terminal device over the wireless link, so that the terminal device can display the scene captured by the UAV camera in real time. When the user wants to mark a moving target in the first image sent by the UAV, a recognition instruction for that target can be entered on the terminal device. For example, after first triggering the tracking button, the user selects the target to be tracked according to the prompt of the terminal device — specifically, by tapping the target on the screen of the terminal device via the touch screen, clicking it with a cursor, or a similar hardware operation. The terminal device processes this hardware operation into a corresponding recognition instruction, which includes the touch-point coordinate of the target to be marked identified from the hardware operation. In this embodiment, the touch-point coordinate is the position of the finger-screen contact point, or of the cursor click, expressed in the coordinate system of the first image.
After the recognition instruction sent by the terminal device is received, one or more motion-target regions can be identified in the first image; one of the identified regions is the motion-target region corresponding to the target to be marked.
Step S130: obtain the bounding figure of each motion-target region and, according to the bounding figures and the touch-point coordinate of the target to be marked, determine the motion-target region corresponding to the target to be marked.
To determine that region, the bounding figure of each motion-target region must be obtained — for example a bounding rectangle or a bounding circle; no specific limitation is imposed here. Since the touch-point coordinate of the target to be marked is known, the corresponding motion-target region can be determined by computing the distance between the bounding figure of each motion-target region and the touch-point coordinate.
Step S140: mark the bounding figure of the determined motion-target region in the first image.
Marking the region of the target to be marked means marking, in the first image, the region enclosed by the bounding figure of the determined motion-target region. For example, if the bounding figure is a bounding rectangle, that rectangle is rendered in the first image as the marking frame of the target to be marked.
A bounding figure is the figure whose boundary is determined by the maximum abscissa, minimum abscissa, maximum ordinate and minimum ordinate over the vertices of a given two-dimensional shape. In this embodiment, the bounding figure is the figure bounded by the maximum abscissa, minimum abscissa, maximum ordinate and minimum ordinate of the motion-target region. The bounding figure therefore contains the entire motion-target region, fits the moving target closely, and contains minimal background information.
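The bounding-rectangle construction described above — the figure bounded by the minimum and maximum abscissa and ordinate of the region — can be sketched in a few lines of NumPy. The function name and the pixel-list representation of a region are illustrative assumptions, not part of the patent.

```python
import numpy as np

def bounding_rect(region_pixels):
    """Axis-aligned bounding rectangle of a motion-target region.

    region_pixels: sequence of (x, y) pixel coordinates of the region.
    Returns (x1, y1, w, h) with (x1, y1) the top-left vertex in image
    coordinates (y increasing downward).
    """
    pts = np.asarray(region_pixels)
    x1, y1 = pts.min(axis=0)   # minimum abscissa / ordinate
    x2, y2 = pts.max(axis=0)   # maximum abscissa / ordinate
    return int(x1), int(y1), int(x2 - x1), int(y2 - y1)
```

Rendering this rectangle over the first image then serves as the marking frame of the target.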
In summary, on the one hand, the technical solution of the present invention marks the region of the target to be marked automatically: the marking is completed by the system without manual frame selection; the user only needs to tap the target, i.e. only the touch-point coordinate of the user needs to be acquired, which is simple and convenient. On the other hand, using the bounding figure of the motion-target region corresponding to the target to be marked as the mark not only ensures that the target lies completely within the marked region, so that it can be marked more accurately, but also reduces the background information in the marked region, avoiding background interference and improving the labeling accuracy.
The purpose of marking the target to be marked is to track it. In the subsequent tracking, features must be extracted from the marked region; during tracking, the moving target is determined by matching the image features of each image captured by the camera against the extracted features. In the present solution, only the features inside the bounding figure are extracted; since the bounding figure contains little background information, background interference is reduced, the accuracy of the extracted features is ensured and the tracking accuracy is in turn improved, and during the subsequent tracking the target to be marked remains within the bounding figure.
In one embodiment of the invention, when only one motion-target region is recognized in the first image — i.e. there is only one moving target in the scene — that moving target is taken by default to be the region of the target to be marked. The user, following the prompt on the display interface of the terminal device, first triggers the tracking button and selects the target to be marked; the terminal device then sends a recognition instruction to the UAV. In response, after identifying the single motion-target region in the first image, the UAV directly marks the region enclosed by its bounding rectangle in the first image, without performing the step of determining the motion-target region corresponding to the target from the bounding figures and the touch-point coordinate, thereby saving system resources.
In another embodiment of the invention, when only one motion-target region is recognized in the first image — i.e. there is only one moving target in the scene — that moving target is again taken by default to be the region of the target to be marked, and the user proceeds as above. In response to the recognition instruction, after identifying the single motion-target region in the first image, the UAV — in order to determine precisely whether that region corresponds to the target to be marked — obtains the bounding rectangle of the region and judges whether the distance between the centre coordinate of the bounding rectangle and the touch-point coordinate of the target is less than a predetermined threshold. If so, the region is determined to be the motion-target region corresponding to the target, and the region enclosed by its bounding rectangle is marked in the first image. If not, the region cannot be marked and the marking fails; the UAV sends a marking-failure message to the terminal device so that the terminal device can prompt the user.
The identification of one or more motion-target regions in the first image in step S120 of Fig. 1 may use prior-art image methods. A preferred identification method provided by the present invention is described below.
In one embodiment of the invention, identifying one or more motion-target regions in the first image in step S120 includes:
Step S121: obtain multiple second images, captured by the camera, that are adjacent to the first image.
In this embodiment, the multiple second images adjacent to the first image are images captured by the camera before the first image. For example, if there are three second images and the first image is the Nth frame captured by the camera, the second images are frames N-1, N-2 and N-3. The number of second images is set according to demand and is not specifically limited here.
Step S122: obtain a background image from the multiple second images.
In this embodiment, the background image is the base image for moving-target labeling. It is obtained from the multiple second images in order to ensure its accuracy; specifically, this embodiment obtains the background image using a background-difference method.
Step S123: compare the background image with the first image to obtain a foreground image.
In this embodiment, both the appearance and the movement of a moving target change the corresponding regions of the image, and the first image is based on the background image: the parts of the first image that differ from the background image are the regions of moving targets. The foreground image is therefore obtained by comparing the background image with the first image, and may be regarded as the part of the first image that differs from the background image; specifically, the foreground image is obtained by the difference method, i.e. by differencing the first image and the background image.
Step S124: binarize the foreground image to obtain a binary image of the foreground image.
Step S125: perform closing-operation processing, dilation processing and connected-region processing on the binary image in turn, and determine one or more motion-target regions in the first image.
Specifically, the foreground image is binarized, e.g. with threshold(). After binarization, however, a motion-target region may not be recognized as one complete region: an originally complete moving target may be fragmented and recognized as multiple targets. The binary image therefore needs post-processing: first a closing operation on the binary image, e.g. morphologyEx(); then a dilation operation after the closing, e.g. dilate(); and finally connected-region processing. In this way complete motion-target regions are obtained.
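As a sketch of the connected-region processing described above, the fragments that remain after morphology can be grouped into 8-connected regions with a breadth-first search. This labeling routine is a hypothetical stand-in for whatever library function an implementation would actually call.

```python
from collections import deque
import numpy as np

def connected_regions(binary):
    """Label 8-connected foreground regions of a binary mask.

    Returns a list of regions, each a list of (x, y) pixel coordinates,
    one entry per motion-target region.
    """
    h, w = binary.shape
    visited = np.zeros(binary.shape, dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                queue, region = deque([(sy, sx)]), []
                visited[sy, sx] = True
                while queue:                     # BFS over the 8-neighbourhood
                    y, x = queue.popleft()
                    region.append((x, y))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and binary[ny, nx] and not visited[ny, nx]:
                                visited[ny, nx] = True
                                queue.append((ny, nx))
                regions.append(region)
    return regions
```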
In one embodiment of the invention, the image after dilation processing is also denoised, for example by Gaussian low-pass filtering.
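Steps S122 through S125 can be sketched end to end with plain NumPy. The median background, the threshold value and the naive 3x3 dilation below are illustrative assumptions standing in for the background-difference, threshold() and dilate() operations named in the text; a real implementation would use library routines.

```python
import numpy as np

def motion_mask(first_image, prev_images, thresh=30):
    """Background difference, binarization and a simple 3x3 dilation.

    first_image: grayscale uint8 frame; prev_images: list of preceding
    grayscale uint8 frames (the "second images"). Returns a uint8 mask.
    """
    # S122: background image from the adjacent second images (here: median)
    background = np.median(np.stack(prev_images), axis=0)
    # S123: foreground = difference between first image and background
    foreground = np.abs(first_image.astype(int) - background.astype(int))
    # S124: binarization
    binary = (foreground > thresh).astype(np.uint8)
    # S125 (partial): naive 3x3 dilation as the OR of the 8-neighbour shifts
    padded = np.pad(binary, 1)
    dilated = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dilated |= padded[1 + dy: 1 + dy + binary.shape[0],
                              1 + dx: 1 + dx + binary.shape[1]]
    return dilated
```

The connected-region step then runs on the returned mask to split it into individual motion-target regions.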
In a specific example, the target specified by the user is the boy with a backpack in the first image, whom the user taps on the display interface of the terminal device. Fig. 2(a) is a schematic diagram of the first image provided by an embodiment of the invention; Fig. 2(b) is a schematic diagram of the binary image; Fig. 2(c) is a schematic diagram of the binary image after closing-operation processing; Fig. 2(d) is a schematic diagram of the binary image after dilation processing; Fig. 2(e) is a schematic diagram of the binary image after connected-region processing; Fig. 2(f) is a schematic diagram of the multiple motion-target regions in the first image; Fig. 2(g) is a schematic diagram of the marking result for the motion-target region to be marked.
As shown in Fig. 2(a), there are four moving targets in the first image. After the foreground image is obtained, it is binarized to obtain the binary image shown in Fig. 2(b). Closing-operation processing of the binary image yields the image shown in Fig. 2(c); dilation of the closed image yields the image shown in Fig. 2(d); connected-region processing of the dilated image yields the image shown in Fig. 2(e).
In this way the four motion-target regions in the first image, numbered 6, 13, 14 and 15 respectively, are obtained. To determine the motion-target region corresponding to the target to be marked (motion-target region 6), the bounding figure of each region must be obtained, as shown in Fig. 2(f); the bounding figure here is a bounding rectangle. According to the bounding rectangle of each motion-target region and the touch-point coordinate of the target to be marked (where the user tapped the boy with the backpack), the corresponding motion-target region is determined, and its bounding figure is marked in the first image as shown in Fig. 2(g), thereby marking the region of the target to be marked (the boy with the backpack).
In one embodiment of the invention, the above bounding figure is a bounding rectangle. Determining, in step S130 of Fig. 1, the motion-target region corresponding to the target to be marked from the bounding figures of the motion-target regions and the touch-point coordinate includes:
Step S131: obtain the bounding rectangle of each motion-target region and determine the centre coordinate of each bounding rectangle.
Step S132: compute the distance between the centre coordinate of each bounding rectangle and the touch-point coordinate of the target to be marked.
Step S133: determine the motion-target region with the minimum distance as the motion-target region corresponding to the target to be marked.
In this embodiment, by computing the distance between the touch-point coordinate and the centre coordinate of each motion-target region, and taking the motion-target region nearest to the touch point as the region corresponding to the target to be marked, the motion-target region corresponding to the target can be determined accurately.
Further, determining the centre coordinate of the bounding rectangle of each motion-target region in step S131 includes:
obtaining the coordinates of the top-left and bottom-right vertices of each bounding rectangle; computing the width and height of each bounding rectangle from those coordinates; and computing the centre coordinate of each bounding rectangle from the top-left coordinate, the width and the height.
Take one of the motion-target regions in the first image as an example. The coordinate obtained for the top-left vertex of its bounding rectangle is B1(x1, y1) and for the bottom-right vertex B2(x2, y2). In image coordinates (y increasing downward) the width of the bounding rectangle is w = x2 - x1 and the height is h = y2 - y1. The determined motion-target region can thus be expressed as R = {x1, y1, w, h}, and the centre coordinate of its bounding rectangle is O(x1 + w/2, y1 + h/2).
Assume the touch-point coordinate of the target to be marked is A(x, y). The distance between the centre coordinate of the bounding rectangle of the motion-target region and the touch-point coordinate is then:
dist = sqrt((x - (x1 + w/2))^2 + (y - (y1 + h/2))^2)
This distance is computed for each motion-target region, and the region with the minimum distance value — i.e. the region nearest to the touch-point coordinate — is taken as the motion-target region corresponding to the target to be marked.
For example, in the embodiment shown in Fig. 2, the distances between the touch-point coordinate and the centre coordinates of motion-target regions 6, 13, 14 and 15 are dist1, dist2, dist3 and dist4 respectively, and their order is determined to be dist1 < dist3 < dist2 < dist4; motion-target region 6 is therefore determined to be the motion-target region corresponding to the target to be marked (the boy with the backpack).
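Steps S131 to S133 — centre coordinate of each bounding rectangle, distance to the touch point, and selection of the minimum — can be sketched as follows. The (x1, y1, w, h) tuples follow the R = {x1, y1, w, h} form used above; the function name is an illustrative assumption.

```python
import math

def select_target_region(rects, touch):
    """Pick the motion-target region whose bounding-rectangle centre is
    nearest the touch point.

    rects: list of bounding rectangles (x1, y1, w, h); touch: (x, y).
    Returns the index of the selected motion-target region.
    """
    def dist(rect):
        x1, y1, w, h = rect
        cx, cy = x1 + w / 2, y1 + h / 2   # centre O of the bounding rectangle
        return math.hypot(touch[0] - cx, touch[1] - cy)
    return min(range(len(rects)), key=lambda i: dist(rects[i]))
```

A threshold check on the minimum distance, as in the single-target embodiment above, could be layered on top to reject taps that land far from every region.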
In practice, however, it cannot be excluded that the user selects the wrong target to be marked; or the target to be marked needs to be changed; or the marking result is wrong — for example, when the touch-point coordinate lies exactly midway between two motion-target regions, the two regions are equidistant from the touch point, one of them is selected at random for marking, the probability of a correct mark is only 50%, and a marking error easily occurs. In all of these situations the target to be marked needs to be reselected or re-marked.
In one embodiment of the invention, the method shown in Fig. 1 further includes:
in response to a cancel-current-mark instruction sent by the terminal device, removing the mark from the first image; and/or, in response to a re-mark instruction sent by the terminal device, removing the mark from the first image and re-executing steps S110 to S140 shown in Fig. 1.
Since re-marking is required, the re-mark instruction includes the touch-point coordinate of the target to be re-marked. This is the touch-point coordinate, identified by the terminal device, of the screen tap made when the user reselects the target to be marked.
It can be seen that, through this embodiment, the user can change the target to be marked, or mark again correctly after a marking error, which enhances the user experience.
Fig. 3 is a kind of illustrative view of functional configuration of the labelling apparatus for moving target that one embodiment of the invention provides.Such as
Shown in Fig. 3, the labelling apparatus 300 of the moving target includes:
Image acquisition unit 310, for obtaining the first image of camera collection.
Recognition unit 320, for the identification instruction of the target to be marked sent in response to terminal device, from the first image
Identify one or more motion target areas;Identification instruction includes the contact coordinate of target to be marked.
Determining unit 330, for obtaining the external figure of each motion target area, according to each motion target area
The contact coordinate of external figure and target to be marked, it is determined that motion target area corresponding with target to be marked.
a marking unit 340 for marking the circumscribed figure of the determined moving target region corresponding to the target to be marked in the first image.
In one embodiment of the invention, the recognition unit 320 is specifically configured to: obtain multiple second images, captured by the camera, adjacent to the first image; obtain a background image from the multiple second images; compare the background image with the first image to obtain a foreground image; binarize the foreground image to obtain a binary image of the foreground image; and perform closing, dilation and connected-component processing on the binary image in turn, thereby determining the one or more moving target regions in the first image.
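The detection pipeline above (background from adjacent frames, foreground differencing, binarization, morphological cleanup, connected components) can be sketched in NumPy. This is a simplified illustration under assumed details the patent leaves open: the background here is the per-pixel median of the second images, the closing/dilation steps are approximated by a single 3x3 dilation, and connected components are found by flood fill; a production system would use a computer-vision library's morphology and labelling routines.

```python
import numpy as np

def detect_motion_regions(first, seconds, thresh=30):
    """Return bounding rectangles (x, y, w, h) of moving regions in `first`,
    using the adjacent `seconds` frames to model the background."""
    background = np.median(np.stack(seconds), axis=0)    # background from adjacent frames
    foreground = np.abs(first.astype(int) - background)  # compare background with first image
    binary = (foreground > thresh).astype(np.uint8)      # binarization of the foreground

    # crude stand-in for closing + dilation: one 3x3 dilation pass
    padded = np.pad(binary, 1)
    h, w = binary.shape
    dil = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dil |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]

    # connected-component labelling by iterative flood fill
    regions, seen = [], np.zeros_like(dil, dtype=bool)
    for y in range(h):
        for x in range(w):
            if dil[y, x] and not seen[y, x]:
                stack, ys, xs = [(y, x)], [], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    ys.append(cy); xs.append(cx)
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and dil[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                regions.append((min(xs), min(ys),
                                max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return regions
```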
In one embodiment of the invention, the circumscribed figure is a circumscribed rectangle.
The determining unit 330 is specifically configured to: obtain the circumscribed rectangle of each moving target region, and determine the centre coordinate of the circumscribed rectangle of each moving target region; calculate the distance between the centre coordinate of each circumscribed rectangle and the contact coordinate of the target to be marked; and determine the moving target region corresponding to the minimum distance as the moving target region corresponding to the target to be marked.
Specifically, the determining unit 330 is further configured to obtain the coordinate of the top-left vertex and the coordinate of the bottom-right vertex of each circumscribed rectangle, calculate the width and height of each circumscribed rectangle from these two coordinates, and calculate the centre coordinate of each circumscribed rectangle from the top-left vertex coordinate, the width and the height.
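The centre computation and nearest-region selection just described amount to a few lines. A minimal sketch, with illustrative function names not taken from the patent:

```python
import math

def rect_center(top_left, bottom_right):
    """Centre of a circumscribed rectangle from its top-left and
    bottom-right vertices, via its width and height, as in the text."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    width, height = x2 - x1, y2 - y1
    return (x1 + width / 2.0, y1 + height / 2.0)

def pick_region(rects, contact):
    """Index of the rectangle whose centre is nearest to the user's
    contact coordinate -- the region taken to contain the target."""
    centers = [rect_center(tl, br) for tl, br in rects]
    dists = [math.hypot(cx - contact[0], cy - contact[1]) for cx, cy in centers]
    return dists.index(min(dists))
```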
In one embodiment of the invention, the apparatus 300 for marking a moving target shown in Fig. 3 further includes:
a removal unit for removing the mark in the first image in response to a cancel-current-mark instruction sent by the terminal device; and/or a notification unit for removing the mark in the first image in response to a re-mark instruction sent by the terminal device, and notifying the image acquisition unit, the recognition unit, the determining unit and the marking unit to perform their corresponding functions; the re-mark instruction includes the contact coordinate of the target to be marked that needs to be re-marked.
Fig. 4 is a schematic diagram of the structure of an apparatus for marking a moving target provided by another embodiment of the invention. As shown in Fig. 4, the apparatus 400 for marking a moving target includes a memory 410 and a processor 420 that communicate over an internal bus 430. The memory 410 stores a computer program 411 for marking a moving target that is executable by the processor 420; when the computer program 411 is executed by the processor 420, the method steps shown in Fig. 1 can be implemented.
In various embodiments, the memory 410 may be an internal memory or a non-volatile memory. The non-volatile memory may be: a storage drive (such as a hard disk drive), a solid-state disk, any kind of storage disc (such as a CD or DVD), a similar storage medium, or a combination thereof. The internal memory may be: RAM (Random Access Memory), volatile memory, non-volatile memory, or flash memory. Further, the non-volatile memory and the internal memory serve as machine-readable storage media on which the computer program 411 for marking a moving target, executed by the processor 420, can be stored.
Fig. 5 is a schematic diagram of the functional structure of an unmanned aerial vehicle provided by an embodiment of the invention. As shown in Fig. 5, the UAV 500 includes a camera 510, a wireless connection unit 520, and the apparatus 530 for marking a moving target as shown in Fig. 3 or Fig. 4.
The camera 510 captures the first image and sends the first image to the marking apparatus 530.
The wireless connection unit 520 receives the mark instruction sent by the terminal device and forwards the mark instruction to the marking apparatus 530; the mark instruction includes the contact coordinate of the target to be marked.
The marking apparatus 530, in response to the recognition instruction for the target to be marked sent by the terminal device, identifies one or more moving target regions from the first image; obtains the circumscribed figure of each moving target region; determines the moving target region corresponding to the target to be marked according to the circumscribed figure of each moving target region and the contact coordinate of the target to be marked; and marks the circumscribed figure of the determined moving target region corresponding to the target to be marked in the first image.
It should be noted that the embodiments of the apparatuses shown in Figs. 3 and 4 and of the UAV shown in Fig. 5 correspond one-to-one with the embodiments of the method shown in Fig. 1, which have been described in detail above and will not be repeated here.
In summary, the beneficial effects of the technical solution of the invention are as follows. On the one hand, the solution enables automatic marking of the region of the target to be marked: the marking is completed automatically by the system, without the user drawing a selection box by hand. The user only needs to tap the target to be marked, i.e. only the user's contact coordinate needs to be obtained, which is simple and convenient. On the other hand, using the circumscribed figure of the moving target region as the mark not only ensures that the target to be marked lies entirely within the marked region and is marked more accurately, but also reduces the background information in the marked region, avoids interference from that background information, and improves the accuracy of the mark.
To track the target to be marked, features must first be extracted from the marked region; then, during tracking, the moving target to be tracked is determined by matching the image features in each image captured by the camera against the extracted features. In this solution, when extracting the features of the target to be marked, only the features inside the circumscribed figure are extracted; since the circumscribed figure contains little background information, interference from the background is reduced and the accuracy of tracking is improved. Moreover, during the subsequent tracking of the target to be marked, it can be ensured that the target remains within the circumscribed figure.
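To illustrate why restricting feature extraction to the circumscribed figure helps, here is a deliberately simple sketch that uses a grey-level histogram of the rectangle's interior as the "feature" and nearest-histogram matching as the "tracking" step. The patent does not specify the feature or matcher; both choices here are assumptions for illustration only.

```python
import numpy as np

def roi_histogram(image, rect, bins=16):
    """Grey-level histogram computed over the circumscribed rectangle only,
    so background outside the rectangle cannot pollute the target's features."""
    x, y, w, h = rect
    roi = image[y:y + h, x:x + w]
    hist, _ = np.histogram(roi, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def match_target(template_hist, image, candidates, bins=16):
    """Pick the candidate rectangle whose interior histogram is closest
    (smallest L1 distance) to the template extracted at marking time."""
    dists = [np.abs(roi_histogram(image, r, bins) - template_hist).sum()
             for r in candidates]
    return int(np.argmin(dists))
```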
The above are only embodiments of the present invention; under the above teaching, those skilled in the art can make further improvements or modifications on the basis of the above embodiments. Those skilled in the art should understand that the above detailed description merely serves to better explain the purpose of the invention, and the scope of protection of the invention shall be defined by the claims.
Claims (10)
1. A method for marking a moving target, characterised in that the method comprises:
obtaining a first image captured by a camera;
identifying one or more moving target regions from the first image in response to a recognition instruction, sent by a terminal device, for a target to be marked, the recognition instruction including a contact coordinate of the target to be marked;
obtaining a circumscribed figure of each moving target region, and determining the moving target region corresponding to the target to be marked according to the circumscribed figure of each moving target region and the contact coordinate of the target to be marked;
marking the circumscribed figure of the determined moving target region corresponding to the target to be marked in the first image.
2. The method according to claim 1, characterised in that identifying one or more moving target regions from the first image comprises:
obtaining multiple second images, captured by the camera, adjacent to the first image;
obtaining a background image from the multiple second images;
comparing the background image with the first image to obtain a foreground image;
binarizing the foreground image to obtain a binary image of the foreground image;
performing closing, dilation and connected-component processing on the binary image in turn, to determine the one or more moving target regions in the first image.
3. The method according to claim 1, characterised in that:
the circumscribed figure is a circumscribed rectangle, and obtaining the circumscribed figure of each moving target region and determining the moving target region corresponding to the target to be marked according to the circumscribed figure of each moving target region and the contact coordinate of the target to be marked comprises:
obtaining the circumscribed rectangle of each moving target region, and determining a centre coordinate of the circumscribed rectangle of each moving target region;
calculating the distance between the centre coordinate of each circumscribed rectangle and the contact coordinate of the target to be marked;
determining the moving target region corresponding to the minimum distance as the moving target region corresponding to the target to be marked.
4. The method according to claim 3, characterised in that determining the centre coordinate of each circumscribed rectangle comprises:
obtaining the coordinate of the top-left vertex and the coordinate of the bottom-right vertex of each circumscribed rectangle;
calculating the width and height of each circumscribed rectangle according to the coordinate of the top-left vertex and the coordinate of the bottom-right vertex;
calculating the centre coordinate of each circumscribed rectangle according to the coordinate of the top-left vertex, the width and the height.
5. The method according to claim 1, characterised in that the method further comprises:
removing the mark in the first image in response to a cancel-current-mark instruction sent by the terminal device;
and/or
removing the mark in the first image in response to a re-mark instruction sent by the terminal device, and re-executing the steps from obtaining the first image captured by the camera to marking the circumscribed figure of the determined moving target region corresponding to the target to be marked in the first image; the re-mark instruction includes the contact coordinate of the target to be marked that needs to be re-marked.
6. An apparatus for marking a moving target, characterised in that the apparatus comprises:
an image acquisition unit for obtaining a first image captured by a camera;
a recognition unit for identifying one or more moving target regions from the first image in response to a recognition instruction, sent by a terminal device, for a target to be marked, the recognition instruction including a contact coordinate of the target to be marked;
a determining unit for obtaining a circumscribed figure of each moving target region, and determining the moving target region corresponding to the target to be marked according to the circumscribed figure of each moving target region and the contact coordinate of the target to be marked;
a marking unit for marking the circumscribed figure of the determined moving target region corresponding to the target to be marked in the first image.
7. The apparatus according to claim 6, characterised in that the recognition unit is specifically configured to:
obtain multiple second images, captured by the camera, adjacent to the first image;
obtain a background image from the multiple second images;
compare the background image with the first image to obtain a foreground image;
binarize the foreground image to obtain a binary image of the foreground image;
perform closing, dilation and connected-component processing on the binary image in turn, to determine the one or more moving target regions in the first image.
8. The apparatus according to claim 5, characterised in that:
the circumscribed figure is a circumscribed rectangle, and the determining unit is specifically configured to:
obtain the circumscribed rectangle of each moving target region, and determine a centre coordinate of the circumscribed rectangle of each moving target region;
calculate the distance between the centre coordinate of each circumscribed rectangle and the contact coordinate of the target to be marked;
determine the moving target region corresponding to the minimum distance as the moving target region corresponding to the target to be marked.
9. The apparatus according to claim 7, characterised in that the apparatus further comprises:
a removal unit for removing the mark in the first image in response to a cancel-current-mark instruction sent by the terminal device;
and/or
a notification unit for removing the mark in the first image in response to a re-mark instruction sent by the terminal device, and notifying the image acquisition unit, the recognition unit, the determining unit and the marking unit to perform their corresponding functions; the re-mark instruction includes the contact coordinate of the target to be marked that needs to be re-marked.
10. An unmanned aerial vehicle, characterised in that the UAV comprises a camera, a wireless connection unit and the apparatus for marking a moving target according to any one of claims 6-9;
the camera is configured to capture the first image and send the first image to the apparatus for marking the moving target;
the wireless connection unit is configured to receive a mark instruction sent by a terminal device and send the mark instruction to the apparatus for marking the moving target; the mark instruction includes the contact coordinate of the target to be marked;
the apparatus for marking the moving target is configured to identify one or more moving target regions from the first image in response to a recognition instruction, sent by the terminal device, for the target to be marked; obtain the circumscribed figure of each moving target region; determine the moving target region corresponding to the target to be marked according to the circumscribed figure of each moving target region and the contact coordinate of the target to be marked; and mark the circumscribed figure of the determined moving target region corresponding to the target to be marked in the first image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710779916.5A CN107590450A (en) | 2017-09-01 | 2017-09-01 | A kind of labeling method of moving target, device and unmanned plane |
PCT/CN2017/110902 WO2019041569A1 (en) | 2017-09-01 | 2017-11-14 | Method and apparatus for marking moving target, and unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710779916.5A CN107590450A (en) | 2017-09-01 | 2017-09-01 | A kind of labeling method of moving target, device and unmanned plane |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107590450A true CN107590450A (en) | 2018-01-16 |
Family
ID=61050663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710779916.5A Pending CN107590450A (en) | 2017-09-01 | 2017-09-01 | A kind of labeling method of moving target, device and unmanned plane |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107590450A (en) |
WO (1) | WO2019041569A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626267A (en) * | 2019-09-17 | 2020-09-04 | 山东科技大学 | Hyperspectral remote sensing image classification method using void convolution |
CN114579524A (en) * | 2022-05-06 | 2022-06-03 | 成都大学 | Method and system for processing image data |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113627413A (en) * | 2021-08-12 | 2021-11-09 | 杭州海康威视数字技术股份有限公司 | Data labeling method, image comparison method and device |
CN114143561B (en) * | 2021-11-12 | 2023-11-07 | 北京中联合超高清协同技术中心有限公司 | Multi-view roaming playing method for ultra-high definition video |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509306A (en) * | 2011-10-08 | 2012-06-20 | 西安理工大学 | Specific target tracking method based on video |
CN105759839A (en) * | 2016-03-01 | 2016-07-13 | 深圳市大疆创新科技有限公司 | Unmanned aerial vehicle (UAV) visual tracking method, apparatus, and UAV |
WO2017045116A1 (en) * | 2015-09-15 | 2017-03-23 | SZ DJI Technology Co., Ltd. | System and method for supporting smooth target following |
US9710709B1 (en) * | 2014-03-07 | 2017-07-18 | Trace Live Network Inc. | Cascade recognition for personal tracking via unmanned aerial vehicle (UAV) |
CN107015572A (en) * | 2014-07-30 | 2017-08-04 | 深圳市大疆创新科技有限公司 | Target tracking system and method |
US20170244937A1 (en) * | 2014-06-03 | 2017-08-24 | Gopro, Inc. | Apparatus and methods for aerial video acquisition |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2217366A1 (en) * | 1997-09-30 | 1999-03-30 | Brc Business Renewal Corporation | Facial recognition system |
CN101067866A (en) * | 2007-06-01 | 2007-11-07 | 哈尔滨工程大学 | Eagle eye technique-based tennis championship simulating device and simulation processing method thereof |
CN100487724C (en) * | 2007-10-08 | 2009-05-13 | 北京科技大学 | Quick target identification and positioning system and method |
CN103149939B (en) * | 2013-02-26 | 2015-10-21 | 北京航空航天大学 | A kind of unmanned plane dynamic target tracking of view-based access control model and localization method |
CN104777847A (en) * | 2014-01-13 | 2015-07-15 | 中南大学 | Unmanned aerial vehicle target tracking system based on machine vision and ultra-wideband positioning technology |
CN104239865B (en) * | 2014-09-16 | 2017-04-12 | 宁波熵联信息技术有限公司 | Pedestrian detecting and tracking method based on multi-stage detection |
CN104794435B (en) * | 2015-04-03 | 2017-12-29 | 中国科学院自动化研究所 | A kind of unmanned plane of view-based access control model moving target detecting method over the ground |
CN105120146B (en) * | 2015-08-05 | 2018-06-26 | 普宙飞行器科技(深圳)有限公司 | It is a kind of to lock filming apparatus and image pickup method automatically using unmanned plane progress moving object |
CN105447888B (en) * | 2015-11-16 | 2018-06-29 | 中国航天时代电子公司 | A kind of UAV Maneuver object detection method judged based on effective target |
CN105578034A (en) * | 2015-12-10 | 2016-05-11 | 深圳市道通智能航空技术有限公司 | Control method, control device and system for carrying out tracking shooting for object |
CN106981073B (en) * | 2017-03-31 | 2019-08-06 | 中南大学 | A kind of ground moving object method for real time tracking and system based on unmanned plane |
2017
- 2017-09-01 CN CN201710779916.5A patent/CN107590450A/en active Pending
- 2017-11-14 WO PCT/CN2017/110902 patent/WO2019041569A1/en active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626267A (en) * | 2019-09-17 | 2020-09-04 | 山东科技大学 | Hyperspectral remote sensing image classification method using void convolution |
CN114579524A (en) * | 2022-05-06 | 2022-06-03 | 成都大学 | Method and system for processing image data |
CN114579524B (en) * | 2022-05-06 | 2022-07-15 | 成都大学 | Method and system for processing image data |
Also Published As
Publication number | Publication date |
---|---|
WO2019041569A1 (en) | 2019-03-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||