CN110298239A - Target monitoring method, apparatus, computer equipment and storage medium - Google Patents

Target monitoring method, apparatus, computer equipment and storage medium

Info

Publication number
CN110298239A
CN110298239A (application CN201910423905.2A)
Authority
CN
China
Prior art keywords
media information
target
identity
tracking
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910423905.2A
Other languages
Chinese (zh)
Inventor
凡金龙 (Fan Jinlong)
马进 (Ma Jin)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910423905.2A
Publication of CN110298239A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/47 Detecting features for summarising video content
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

This application discloses a target monitoring method, apparatus, computer equipment and storage medium. The method comprises: obtaining first media information; identifying, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and determining the number N of target objects from the identity set; obtaining second media information; performing deduplication on the target objects in the second media information; determining, from the deduplicated second media information, the identities of the objects to be tracked and their number M; judging, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking; and, if so, determining the identity of the lost target object from the objects to be tracked and the identity set. This approach not only makes target monitoring convenient and intelligent, but also greatly improves the efficiency of counting.

Description

Target monitoring method, apparatus, computer equipment and storage medium
Technical field
The present invention relates to the field of image recognition, and more particularly to a target monitoring method, apparatus, computer equipment and storage medium.
Background art
On a large farm, managing the livestock is a time-consuming, labor-intensive manual task. Counting the animals, for example, still relies mainly on manual tallies; since the purpose of the tally is to detect animals that have strayed, the interval between tallies cannot be too long.
In a farming site covering a large area, cows usually move about at random, which makes a manual count harder to carry out and seriously affects its accuracy.
Summary of the invention
Embodiments of the present invention provide a target monitoring method, apparatus, computer equipment and storage medium to solve the problem that counting livestock is difficult and inefficient.
A target monitoring method, comprising:
obtaining first media information, the first media information being a photo or video containing all target objects within a preset range;
identifying, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and determining the number N of target objects from the identity set;
obtaining second media information, the second media information comprising at least two frames;
performing deduplication on the target objects in the second media information;
determining, from the deduplicated second media information, the identities of the objects to be tracked and the number M of objects to be tracked;
judging, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking;
if a target object has been lost from tracking, determining the identity of the lost target object from the objects to be tracked and the identity set.
A target monitoring apparatus, comprising:
a first acquisition module, configured to obtain first media information, the first media information being a photo or video containing all target objects within a preset range;
a first processing module, configured to identify, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and to determine the number N of target objects from the identity set;
a second acquisition module, configured to obtain second media information, the second media information comprising at least two frames;
a deduplication module, configured to perform deduplication on the target objects in the second media information;
a second processing module, configured to determine, from the deduplicated second media information, the identities of the objects to be tracked and the number M of objects to be tracked;
a first judgment module, configured to judge, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking;
a third processing module, configured to, when a target object has been lost from tracking, determine the identity of the lost target object from the objects to be tracked and the identity set.
A computer equipment, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above target monitoring method when executing the computer program.
A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above target monitoring method.
With the above target monitoring method, apparatus, computer equipment and storage medium, after the first media information and the second media information are obtained, both are first deduplicated, the target objects detected in the two pieces of media information are compared, it is judged whether any target object has strayed, and the identity of that target object is determined; on this basis the number of target objects within the preset range is counted. The identities of the target objects can be recognized from the captured video or photos, deduplication performed, their number counted automatically by a program, and the movement of each target object monitored. This approach not only makes target monitoring convenient and intelligent, but also greatly improves the efficiency of counting.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application environment of the target monitoring method in an embodiment of the present invention;
Fig. 2 is a flow chart of the target monitoring method in an embodiment of the present invention;
Fig. 3 is another flow chart of the target monitoring method in an embodiment of the present invention;
Fig. 4 is another flow chart of the target monitoring method in an embodiment of the present invention;
Fig. 5 is another flow chart of the target monitoring method in an embodiment of the present invention;
Fig. 6 is another flow chart of the target monitoring method in an embodiment of the present invention;
Fig. 7 is another flow chart of the target monitoring method in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the module structure of the target monitoring apparatus in an embodiment of the present invention;
Fig. 9 is another schematic diagram of the module structure of the target monitoring apparatus in an embodiment of the present invention;
Fig. 10 is a schematic diagram of a computer equipment in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The target monitoring method provided by the embodiments of the present invention can be applied in the application environment shown in Fig. 1, where a client (computer equipment) communicates with a server over a network. The computer equipment may be used to obtain the first media information and the second media information. The server identifies, using a deep neural network, the identities of all target objects in the first media information to form an identity set and determines the number N of target objects from the identity set; after obtaining the second media information, it deduplicates the target objects in each frame of the second media information; it then determines, from the deduplicated second media information, the identities of the objects to be tracked and their number M; it judges, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking; and if so, it determines the identity of the lost target object from the objects to be tracked and the identity set.
It should be noted that the client may be installed on, but is not limited to, personal computers, laptops, smartphones, tablets and portable wearable devices. The server may be implemented as a standalone server or as a cluster of servers.
In one embodiment, as shown in Fig. 2, a target monitoring method is provided. Taking the method applied to the server in Fig. 1 as an example, it includes the following steps:
S10: obtain first media information, the first media information being a photo or video containing all target objects within a preset range.
The first media information in this application may be multiple photos taken of the herd from different angles, or a dynamic video; shooting from multiple angles ensures that the video or photos cover all target objects within the preset range. The preset range refers to the activity area of the target objects to be monitored, for example a fenced grazing area in which the cows or sheep of a farm are video-monitored. If the first media information is obtained by recording a video, the video first needs to be preprocessed to extract frames. The target objects recorded in the several frames extracted from the video, or in the multiple photos, will inevitably overlap; to avoid double counting, deduplication can be performed across the photos or frames.
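As a rough illustration of this preprocessing step only, the sketch below samples frames from a recorded video at a fixed interval using OpenCV; the file name, sampling interval and output directory are placeholders rather than values given in this application.

    import cv2
    import os

    def extract_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
        """Sample one frame every `every_n` frames from the video and save them as images."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        saved, idx = 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:                      # end of video
                break
            if idx % every_n == 0:          # keep one frame per interval
                cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
                saved += 1
            idx += 1
        cap.release()
        return saved

    # e.g. extract_frames("pasture_cam01.mp4", "frames/", every_n=30)  # names are illustrative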
S20: identify, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and determine the number N of target objects from the identity set.
A deep neural network (DNN, Deep Neural Network) is a neural network that can be used to imitate the human brain for large-scale image processing. Before the DNN is used to recognize images, it is first trained: prepared training samples are passed through the layers of the network, features are extracted, and the classifier outputs the corresponding classification results and predictions. Concretely, a video or image containing a target object is converted into a pixel matrix and fed into the network, which processes it with functions and models related to recognizing the object and outputs a series of results that reflect the object's identity. By training the DNN on a large number of training samples, the parameters in the network (such as weights and biases) are continually adjusted, thereby optimizing the deep neural network.
When a test sample is input into the deep neural network during application, the network automatically outputs the corresponding recognition result. Specifically, the neural network performs feature extraction and classification, mapping the extracted features onto labels that have been set in advance; for example, the facial features extracted from a cow's head are mapped to that cow's identity, so the identity of each cow can serve as one classification label.
The identity set enumerates the identities of all target objects in the first media information. In this application each target object is represented and distinguished by an identification code, so the identity set can be the set of codes corresponding to the target objects. Further, the total number N of target objects can be obtained by counting the identification codes contained in the identity set.
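As a minimal illustration (the identification codes below are invented for the example, not taken from this application), the identity set can simply be a set of codes, with N being its size:

    # Hypothetical identity set built from the first media information.
    identity_set = {"e69de2", "a1f93c", "7b02dd", "c480e1", "5d9f0a"}  # cows A..E
    N = len(identity_set)  # total number of monitored target objects, here N = 5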
S30: obtain second media information, the second media information comprising at least two frames.
The second media information comprises at least two frames and may be a dynamic video or several still photos; as before, a dynamic video can be converted into a series of frames. Unlike the first media information, there is no particular restriction on the number or identity of the target objects contained in each photo or frame of the second media information; the shots are random. One possible case is as follows: the first frame captures cows A, B and C, the second frame captures cows B, C and D, and the third frame captures only cow A. The number of target objects in a picture depends on the shooting location and angle: if the density of target objects at a location is high, or the camera's angle of view is wide, the picture will contain more target objects.
S40: perform deduplication on the target objects in the second media information.
The videos or photos of the second media information are shot from different angles and different locations to record the target objects and their motion trajectories. By deduplicating the target objects contained in all frames or photos of the second media information, the number of distinct target objects contained in the second media information can be calculated.
In one embodiment, as shown in Fig. 3, step S40 may specifically include the following steps:
S401: accumulate the numbers of target objects in each frame of the second media information to obtain a cumulative total;
S402: obtain the number of repetitions of target objects in the second media information;
S403: subtract the number of repetitions from the cumulative total to obtain the quantity M.
For example, suppose the first frame contains cows A, B and C, the second frame contains cows B, C and D, and the third frame captures cows A and D. Recognizing each frame with the deep neural network yields an identification code representing each cow's identity; a repeated identification code means a repeated cow, which must then be deduplicated. The deduplication can be computed as follows: comparing the three frames together, the cows appear 8 times in total, and cows A, B, C and D are each repeated once, so the repetitions are subtracted from the total to obtain the number of distinct target objects, i.e. M = 8 - 1 - 1 - 1 - 1 = 4. Besides the calculation described above, deduplication can also be done in other ways, for instance by comparing the target objects detected in the current frame with those already detected in the previous frames. For example: the first frame detected cows A, B and C and the second frame contains cows B, C and D; the total number of cows appearing in the first and second frames is 6, and two cows (B and C) are each repeated once, so deduplication gives M = 6 - 2 = 4. This is the result of deduplicating the first and second frames; comparing it with the third frame, cows A and D in the third frame have already appeared and are already included in the previously computed count, so they need not be added again.
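A simple way to realize this deduplication in code is to treat each frame as a set of identification codes; the sketch below, with the cow labels A to D standing in for real identification codes, reproduces both calculations from the example above (cumulative total minus repetitions, and the running comparison against already-seen cows):

    # Per-frame detections from the example above (labels stand in for identification codes).
    frames = [{"A", "B", "C"}, {"B", "C", "D"}, {"A", "D"}]

    # Variant 1: cumulative total minus repetitions (steps S401-S403).
    cumulative = sum(len(f) for f in frames)          # 3 + 3 + 2 = 8
    distinct = set().union(*frames)                   # {"A", "B", "C", "D"}
    repetitions = cumulative - len(distinct)          # 4 repeated appearances
    M = cumulative - repetitions                      # M = 4

    # Variant 2: compare each frame against what has already been seen.
    seen = set()
    for f in frames:
        seen |= f                                     # only genuinely new cows are added
    assert len(seen) == M == 4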
In the embodiment corresponding to Fig. 3, by deduplicating the target objects contained in the second media information, the number M of target objects in the second media information can be obtained, which makes it convenient to subsequently compare N with M and determine whether any target object has been lost.
S50: determine, from the deduplicated second media information, the identities of the objects to be tracked and the number M of objects to be tracked.
After the second media information has been deduplicated, all the distinct target objects, i.e. all the target objects that appear in the second media information, are obtained. These target objects can be set as the objects to be tracked; the identities of the target objects (for example their identification codes) are the identities of the objects to be tracked, and the number of target objects after deduplication is the number M of objects to be tracked.
The recognition needed both when identifying the target objects in the second media information and during deduplication is performed with the deep neural network, in a process similar to that of step S20, so the details are not repeated here.
Further, in the embodiments of this application each object to be tracked can be tracked using an optical flow method. When the human eye observes a moving object, the scene of the object forms a series of continuously changing images on the retina; this continuously changing information keeps "flowing through" the retina like a stream of light, hence the name "optical flow". The principle of optical flow tracking is that an optical flow sensor continuously captures images of the object's surface at a given rate and then analyzes the resulting digital image matrices. Since two adjacent images always share identical features, comparing the positional changes of these features makes it possible to judge the average motion of the surface features. The result of this analysis is finally converted into a two-dimensional coordinate offset and stored in a register in the form of pixels, realizing detection of the moving object.
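As a hedged sketch of how such frame-to-frame tracking might be done in practice (this application does not prescribe a specific library), the following uses OpenCV's pyramidal Lucas-Kanade optical flow to propagate feature points of a tracked object from one frame to the next; the frame variables and parameter values are placeholders:

    import cv2
    import numpy as np

    def track_points(prev_frame, next_frame, prev_points):
        """Propagate tracked feature points from prev_frame to next_frame with LK optical flow."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
        next_points, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, prev_points, None,
            winSize=(21, 21), maxLevel=3)
        good = status.reshape(-1) == 1       # keep only points that were found again
        return next_points[good], prev_points[good]

    # prev_points would typically come from a corner detector on the object's region, e.g.:
    # prev_points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
    #                                       qualityLevel=0.3, minDistance=7)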
S60: judge, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking.
The identity set recognized from the first media information in step S20 gives the number N of all target objects within the preset range, and deduplicating each frame of the second media information in step S40 determines all the target objects that appear in the second media information. Taking the number N of all target objects as the baseline, whether any target object has been lost can be judged from the number M of objects to be tracked.
Specifically, suppose that in a cattle farm the target objects to be monitored and managed in the enclosure are the five cows A, B, C, D and E. A series of photos or a video containing all the cows, i.e. the first media information in this application, is first selected; this media information is then fed into a convolutional neural network to extract deep features, and these deep features are hash-encoded to generate unique identification codes, which are assigned to the corresponding cows for distinguishing and identifying them.
Multiple cameras arranged at multiple positions in the enclosure photograph the cows; the series of photos or videos they capture is the second media information in this application, and the cows contained in each photo or frame, and their number, are random and uncertain. As in the example in step S40, suppose deduplicating each frame of the second media information yields the four cows A, B, C and D. From the scenario above, the number N of all target objects within the preset range is 5, while the number M of objects to be tracked obtained from the captured second media information is 4; from the relationship between the two it can be inferred that a cow has been lost from tracking.
S70: if a target object has been lost from tracking, determine the identity of the lost target object from the objects to be tracked and the identity set.
Continuing the example of step S60, if it is inferred that a target object has been lost from tracking, the identity of the lost target object can be determined from the identities of the objects to be tracked obtained from the second media information and from the identity set. The deep neural network recognizes that the cows captured in the second media information are A, B, C and D, whereas the cows originally monitored in the enclosure are A, B, C, D and E; it can therefore be concluded that cow E has been lost from tracking in the video.
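In code, this final step amounts to a set difference between the identity set and the identities of the objects to be tracked; the snippet below, reusing the invented codes from the earlier example, is one plausible realization:

    # Identity set from the first media information (cows A..E, codes are illustrative).
    identity_set = {"e69de2": "A", "a1f93c": "B", "7b02dd": "C", "c480e1": "D", "5d9f0a": "E"}

    # Identification codes of the objects to be tracked, recognized in the second media information.
    tracked_ids = {"e69de2", "a1f93c", "7b02dd", "c480e1"}   # cows A, B, C, D

    N, M = len(identity_set), len(tracked_ids)
    if M < N:                                                 # a target object was lost from tracking
        lost = set(identity_set) - tracked_ids                # {"5d9f0a"}
        print("lost target object(s):", [identity_set[c] for c in lost])  # -> ['E']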
With the above steps, the target monitoring method in this application achieves automated, intelligent management of the targets: from the captured visual information it can automatically count the animals, monitor their movement paths and judge whether any animal has strayed, which greatly saves human resources and improves the accuracy of the count.
In one embodiment, as shown in Fig. 4, identifying, using the deep neural network, the identities of all target objects in the first media information to form the identity set comprises:
S201: divide the first media information into a frame set for each target object.
Each frame of the first media information may contain multiple target objects, and one target object may also appear in multiple frames, so all frames need to be divided by target object. For example, the frames containing cow A are extracted from all frames to form the corresponding frame set, the frames containing cow B are extracted to form another frame set, and so on, until the first media information has been turned into a frame set for each target object.
S202: perform feature extraction on the frame set of each target object using the deep neural network to obtain the deep features of that target object.
A frame set records shots of the same target object from different angles. The deep neural network extracts features from each frame in the frame set, and the resulting deep features are then pooled together by max-pooling to form the final feature.
Max pooling is the most common pooling operation of a pooling layer. The role of the pooling layer is to compress the input feature maps in a neural network: on the one hand it makes the feature maps smaller and simplifies the network's computational complexity, and on the other hand it compresses the features and extracts the main ones. Max pooling slides a receptive field over the input and takes the maximum value within each region, thereby extracting the main features.
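A minimal sketch of this aggregation step, assuming a generic PyTorch backbone as the feature extractor (this application does not name a specific network), could look as follows; the element-wise maximum over the per-frame feature vectors plays the role of the max-pooling described above:

    import torch
    import torch.nn as nn

    def aggregate_object_feature(backbone: nn.Module, frames: torch.Tensor) -> torch.Tensor:
        """frames: (num_frames, 3, H, W) crops of one target object from different angles.
        Returns a single deep feature vector obtained by max-pooling over the frames."""
        backbone.eval()
        with torch.no_grad():
            per_frame = backbone(frames)          # (num_frames, feature_dim)
            feature, _ = per_frame.max(dim=0)     # element-wise max over the frame axis
        return feature

    # Illustrative usage with a toy backbone (a real system would use a trained CNN):
    toy_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    crops = torch.rand(5, 3, 64, 64)              # five crops of the same cow
    deep_feature = aggregate_object_feature(toy_backbone, crops)  # shape: (128,)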
S203: generate a corresponding identification code from the deep feature of each target object by hash mapping, the identification code indicating the identity of the target object.
The deep features obtained by feature extraction on a target object with the deep neural network take the form of many floating-point numbers, for example -0.21046 and 0.55071. Hash mapping turns these floating-point numbers representing the deep features into a character string, which is the form the identification code takes in this application. The mapped character string can be a combination of letters and digits, such as e69de2. Different target objects yield different deep features from feature extraction, so the character strings produced by the hash algorithm also differ; different character strings can therefore represent different target object identities, distinguishing and identifying the target objects.
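One hedged way to realize such a mapping (this application does not fix the hash algorithm) is to serialize the feature vector, or a quantized version of it, and hash the bytes; the short identification code is then a prefix of the hex digest. Rounding is an assumption added here to make the code stable against tiny floating-point noise.

    import hashlib
    import numpy as np

    def feature_to_code(feature: np.ndarray, length: int = 6) -> str:
        """Map a deep feature vector to a short alphanumeric identification code."""
        quantized = np.round(feature.astype(np.float32), 3)   # rounding: an added assumption
        digest = hashlib.sha256(quantized.tobytes()).hexdigest()
        return digest[:length]

    code = feature_to_code(np.array([-0.21046, 0.55071, 0.1337]))
    print(code)  # a 6-character letter/digit string, in the style of 'e69de2'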
In the embodiments of this application, the biological features possessed by each target object are used to generate a unique identification code, which is assigned to the corresponding target object so that target objects can be accurately distinguished and identified, facilitating real-time counting and monitoring of the target objects.
Before step S20, in which the deep neural network identifies all target objects in the first media information to form the identity set, the first media information containing all target objects needs to be obtained by arranging multiple cameras within the preset range. "All target objects" means all target objects within the preset range, and the media information comprises photos or frames intercepted from a video. For example, a video obtained through multiple cameras must contain traces of every cow appearing in the enclosure, so that the identity of every cow can be recognized from the video and used as the reference standard for comparison.
In one embodiment, as shown in Fig. 5, after step S70, i.e. after the identity of the lost target object is determined from the objects to be tracked and the identity set, the method further includes:
S80: obtain third media information, the third media information being a photo or video captured after the second media information is obtained.
There are two possible reasons why a target object is lost from tracking in the video. One is occlusion by other obstacles or animals, so that the lost target object cannot be recognized in the second media information; the other is that the target object has really left the enclosure, so the cameras capture nothing related to it. If a target object appears to be lost in the captured second media information, the reason for the loss needs to be analyzed from further video information; therefore third media information also needs to be obtained after the second media information.
S90: if the lost target object does not appear in the third media information, determine that the lost target object has left the preset range.
If cow E still does not appear in the frames of the third media information, it can be judged that the cow has crossed the boundary of the enclosure that the cameras can monitor, i.e. it has strayed.
S100: if the lost target object appears in the third media information, determine that the lost target object is a visual loss.
If cow E reappears in the field of view in the third media information, it means the cow was merely occluded by other obstacles, causing a visual loss; in fact the cow is still present in the herd. Here, visual loss refers to the phenomenon of a target object disappearing from view at some moment or during some period.
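The decision in steps S90 and S100 can be expressed as a small check over the identification codes recognized in the third media information; the sketch below is illustrative only, with invented function and variable names:

    def classify_loss(lost_id, third_media_ids):
        """Distinguish a stray (left the preset range) from a temporary visual loss."""
        if lost_id in third_media_ids:
            return "visual loss (occluded, still in the herd)"
        return "left the preset range (strayed)"

    # third_media_ids would be produced by the same DNN recognition as in steps S20/S50.
    print(classify_loss("5d9f0a", {"e69de2", "a1f93c", "7b02dd", "c480e1"}))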
The target monitoring method in this application can analyze the specific reason for a target object's loss by checking whether the supposedly lost target object reappears in the frames of subsequent video information. Losses caused by different reasons lead to different counting treatments, which not only makes the scheme flexible but also avoids counting errors caused by the shooting angle, improving counting accuracy.
In one embodiment, as shown in Fig. 6, after step S90, i.e. after it is determined that the lost target object has left the preset range, the method further includes:
S101: subtract the number of lost target objects from the number N of all target objects to count the number of target objects currently within the preset range.
After it is determined that a target object has walked out of the enclosure, i.e. the preset range, the number of target objects within the preset range needs to be counted again: the number of strayed target objects is subtracted from the number N of all target objects in the first media information to obtain the number of target objects currently within the preset range.
S102: send a warning according to the number of target objects currently within the preset range.
The condition for triggering the warning can be set according to the number of target objects within the preset range; for example, it can be set so that a warning is triggered when the number of target objects within the preset range is less than N, sounding a whistle or lighting a warning lamp to alert the managers. It should be noted that this application does not specifically limit the condition for triggering the warning.
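A hedged sketch of this alarm logic follows; the send_alert helper and the alert channel are placeholders for whatever notification mechanism an actual deployment would use:

    def send_alert(message: str) -> None:
        # Placeholder: an actual system might sound a siren, light a lamp, or push a message.
        print("ALERT:", message)

    def check_and_warn(N: int, lost_ids: set) -> int:
        """Update the current head count after confirmed strays and warn if it drops below N."""
        current_count = N - len(lost_ids)
        if current_count < N:                      # trigger condition used in the example above
            send_alert(f"{len(lost_ids)} target object(s) missing: {sorted(lost_ids)}; "
                       f"current count {current_count} of {N}")
        return current_count

    check_and_warn(5, {"5d9f0a"})   # -> ALERT: 1 target object(s) missing ...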
In the embodiments of this application, after the identity of the lost target object is determined, the specific cause of the loss can be analyzed from the frames related to that object, and losses caused by different reasons are handled with different counting treatments, which makes the scheme flexible and improves the accuracy of counting the animals. Moreover, the target monitoring method of this application can automatically count the target objects in the enclosure from the video information, saving considerable human resources, reducing management costs and improving counting accuracy.
In one embodiment, as shown in Fig. 7, judging, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking comprises:
S601: if M is less than N, determine that a target object has been lost from tracking.
If the number M of objects to be tracked obtained by recognizing the second media information is less than the total number N of target objects, a target object has been lost during shooting.
S602: if M is equal to N, determine that no target object has been lost from tracking.
If M is equal to N, all the cows monitored from the start appear in the second media information and no cow has been lost.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
In one embodiment, a target monitoring apparatus is provided, corresponding one-to-one to the target monitoring method in the above embodiments. As shown in Fig. 8, the target monitoring apparatus includes a first acquisition module 10, a first processing module 20, a second acquisition module 30, a deduplication module 40, a second processing module 50, a first judgment module 60 and a third processing module 70. The functional modules are described in detail as follows:
a first acquisition module 10, configured to obtain first media information, the first media information being a photo or video containing all target objects within a preset range;
a first processing module 20, configured to identify, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and to determine the number N of target objects from the identity set;
a second acquisition module 30, configured to obtain second media information, the second media information comprising at least two frames;
a deduplication module 40, configured to perform deduplication on the target objects in the second media information;
preferably, the deduplication module 40 is further configured to: accumulate the numbers of target objects in each frame of the second media information to obtain a cumulative total;
obtain the number of repetitions of target objects in the second media information; and
subtract the number of repetitions from the cumulative total to obtain the quantity M;
a second processing module 50, configured to determine, from the deduplicated second media information, the identities of the objects to be tracked and the number M of objects to be tracked;
a first judgment module 60, configured to judge, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking;
a third processing module 70, configured to, when a target object has been lost from tracking, determine the identity of the lost target object from the objects to be tracked and the identity set.
Preferably, in one embodiment of this application, as shown in Fig. 9, the first processing module 20 includes a division unit 201, a feature extraction unit 202 and a mapping unit 203.
The division unit 201 is configured to divide the first media information into a frame set for each target object;
the feature extraction unit 202 is configured to perform feature extraction on the frame set of each target object using the deep neural network to obtain the deep features of that target object;
the mapping unit 203 is configured to generate a corresponding identification code from the deep feature of each target object by hash mapping, the identification code indicating the identity of the target object.
Preferably, the target monitoring apparatus further includes a third acquisition module and a second judgment module.
The third acquisition module is configured to obtain third media information, the third media information being a photo or video captured after the second media information is obtained;
the second judgment module is configured to determine, when the lost target object does not appear in the third media information, that the lost target object has left the preset range;
the second judgment module is further configured to determine, when the lost target object appears in the third media information, that the lost target object is a visual loss.
Preferably, the target monitoring apparatus further includes a calculation module and a sending module.
The calculation module is configured to subtract the number of lost target objects from the number N of all target objects to count the number of target objects currently within the preset range;
the sending module is configured to send a warning according to the number of target objects currently within the preset range and the identity of the lost target object.
Preferably, the first judgment module includes a processing unit.
The processing unit is configured to determine, when M is less than N, that a target object has been lost from tracking;
and to determine, when M is equal to N, that no target object has been lost from tracking.
For specific limitations on the target monitoring apparatus, reference may be made to the limitations on the target monitoring method above, which are not repeated here. Each module in the above target monitoring apparatus may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in or independent of a processor in a computer equipment in the form of hardware, or stored in a memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer equipment is provided. The computer equipment may be a server, and its internal structure may be as shown in Fig. 10. The computer equipment includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer equipment provides computing and control capabilities. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer equipment stores the data used in the target monitoring method of the above embodiments. The network interface of the computer equipment communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a target monitoring method.
In one embodiment, a computer equipment is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the target monitoring method of the above embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the target monitoring method of the above embodiments.
A person of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It will be clear to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions may be assigned to different functional units and modules as needed, i.e. the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (10)

1. A target monitoring method, characterized in that the target monitoring method comprises:
obtaining first media information, the first media information being a photo or video containing all target objects within a preset range;
identifying, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and determining the number N of target objects from the identity set;
obtaining second media information, the second media information comprising at least two frames;
performing deduplication on the target objects in the second media information;
determining, from the deduplicated second media information, the identities of objects to be tracked and the number M of objects to be tracked;
judging, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking;
if a target object has been lost from tracking, determining the identity of the lost target object from the objects to be tracked and the identity set.
2. The target monitoring method according to claim 1, characterized in that identifying, using the deep neural network, the identities of all target objects in the first media information to form the identity set comprises:
dividing the first media information into a frame set for each target object;
performing feature extraction on the frame set of each target object using the deep neural network to obtain the deep features of each target object;
generating a corresponding identification code from the deep feature of each target object by hash mapping, the identification code indicating the identity of the target object.
3. The target monitoring method according to claim 1, characterized in that performing deduplication on the target objects in the second media information comprises:
accumulating the numbers of target objects in each frame of the second media information to obtain a cumulative total;
obtaining the number of repetitions of target objects in the second media information;
subtracting the number of repetitions from the cumulative total to obtain the quantity M.
4. The target monitoring method according to claim 1, characterized in that, after determining the identity of the lost target object from the objects to be tracked and the identity set, the method further comprises:
obtaining third media information, the third media information being a photo or video captured after the second media information is obtained;
if the lost target object does not appear in the third media information, determining that the lost target object has left the preset range;
if the lost target object appears in the third media information, determining that the lost target object is a visual loss.
5. The target monitoring method according to claim 4, characterized in that, after determining that the lost target object has left the preset range, the target monitoring method further comprises:
subtracting the number of lost target objects from the number N of all target objects to count the number of target objects currently within the preset range;
sending a warning according to the number of target objects currently within the preset range and the identity of the lost target object.
6. The target monitoring method according to claim 1, characterized in that judging, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking comprises:
if M is less than N, determining that a target object has been lost from tracking;
if M is equal to N, determining that no target object has been lost from tracking.
7. A target monitoring apparatus, characterized in that the target monitoring apparatus comprises:
a first acquisition module, configured to obtain first media information, the first media information being a photo or video containing all target objects within a preset range;
a first processing module, configured to identify, using a deep neural network, the identities of all target objects in the first media information to form an identity set, and to determine the number N of target objects from the identity set;
a second acquisition module, configured to obtain second media information, the second media information comprising at least two frames;
a deduplication module, configured to perform deduplication on the target objects in the second media information;
a second processing module, configured to determine, from the deduplicated second media information, the identities of objects to be tracked and the number M of objects to be tracked;
a first judgment module, configured to judge, from the number N of all target objects and the number M of objects to be tracked, whether any target object has been lost from tracking;
a third processing module, configured to, when a target object has been lost from tracking, determine the identity of the lost target object from the objects to be tracked and the identity set.
8. The target monitoring apparatus according to claim 7, characterized in that the first processing module comprises:
a division unit, configured to divide the first media information into a frame set for each target object;
a feature extraction unit, configured to perform feature extraction on the frame set of each target object using the deep neural network to obtain the deep features of each target object;
a mapping unit, configured to generate a corresponding identification code from the deep feature of each target object by hash mapping, the identification code indicating the identity of the target object.
9. A computer equipment, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the target monitoring method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the target monitoring method according to any one of claims 1 to 6.
CN201910423905.2A 2019-05-21 2019-05-21 Target monitoring method, apparatus, computer equipment and storage medium Pending CN110298239A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910423905.2A CN110298239A (en) 2019-05-21 2019-05-21 Target monitoring method, apparatus, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN110298239A true CN110298239A (en) 2019-10-01

Family

ID=68026955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910423905.2A Pending CN110298239A (en) 2019-05-21 2019-05-21 Target monitoring method, apparatus, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110298239A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105248308A (en) * 2015-11-18 2016-01-20 谭圆圆 Grazing system based on unmanned aerial vehicle and grazing method thereof
WO2017152794A1 (en) * 2016-03-10 2017-09-14 Zhejiang Shenghui Lighting Co., Ltd. Method and device for target tracking
WO2018133666A1 (en) * 2017-01-17 2018-07-26 腾讯科技(深圳)有限公司 Method and apparatus for tracking video target
CN108932496A (en) * 2018-07-03 2018-12-04 北京佳格天地科技有限公司 The quantity statistics method and device of object in region

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021083381A1 (en) * 2019-11-01 2021-05-06 北京观海科技发展有限责任公司 Animal identity recognition method, apparatus and system
CN112785620A (en) * 2019-11-01 2021-05-11 北京观海科技发展有限责任公司 Animal identity identification method, device and system
CN111967306A (en) * 2020-07-02 2020-11-20 广东技术师范大学 Target remote monitoring method and device, computer equipment and storage medium
CN113422847A (en) * 2021-08-23 2021-09-21 中国电子科技集团公司第二十八研究所 Aircraft identification number unified coding method based on airborne ADS-B
CN113422847B (en) * 2021-08-23 2021-11-02 中国电子科技集团公司第二十八研究所 Aircraft identification number unified coding method based on airborne ADS-B


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination