CN111539974B - Method and device for determining track, computer storage medium and terminal - Google Patents

Method and device for determining track, computer storage medium and terminal

Info

Publication number
CN111539974B
Authority
CN
China
Prior art keywords
image frame
target object
target
image frames
adjacent image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010265917.XA
Other languages
Chinese (zh)
Other versions
CN111539974A (en)
Inventor
林晓明
江金陵
鲁邹尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co ltd
Original Assignee
Beijing Mininglamp Software System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mininglamp Software System Co ltd filed Critical Beijing Mininglamp Software System Co ltd
Priority to CN202010265917.XA priority Critical patent/CN111539974B/en
Publication of CN111539974A publication Critical patent/CN111539974A/en
Application granted granted Critical
Publication of CN111539974B publication Critical patent/CN111539974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiment of the invention judges whether the target objects in adjacent image frames are the same object based on their moving positions, automatically generates the movement trajectory of each target object based on the judgment result, and thereby improves the efficiency of analyzing the activity of the target objects.

Description

Method and device for determining track, computer storage medium and terminal
Technical Field
The present disclosure relates to, but is not limited to, multimedia technologies, and more particularly to a method, an apparatus, a computer storage medium, and a terminal for performing trajectory determination.
Background
As living standards improve, hygiene and safety become increasingly important. Food safety is an important component of hygiene and safety, and kitchen hygiene is in turn an important component of food safety. In kitchen hygiene, certain animals, such as mice, can create serious hygiene and safety problems; since restaurant kitchens are consumer-facing places of business, monitoring such animals is an important task.
Taking a restaurant kitchen as an example, the camera used there is generally fixed, so the background changes little. In the related art, mice in video are detected and localized mainly by combining a moving-object detection model based on Gaussian-mixture background modeling with an image classification model. The process is roughly: 1. detect moving objects and their positions in the video through the moving-object detection model; 2. classify the detected images containing moving objects with the image classification model to determine which images contain mice; 3. determine the mice appearing in the video from the output of the image classification model, and localize them.
This approach only detects and localizes the animals; the user must then analyze activity information from the detection and localization results manually. Such manual trajectory analysis is inefficient and time-consuming, which limits the user's ability to analyze activity trajectories.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the invention provides a method and a device for determining a trajectory, a computer storage medium, and a terminal, which can be used, for example, to determine mouse activity trajectories.
The embodiment of the invention provides a method for realizing track determination, which comprises the following steps:
determining the moving position of a target object for an image frame containing the target object in a video;
judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
On the other hand, an embodiment of the present invention further provides a computer storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for implementing trajectory determination is implemented.
In another aspect, an embodiment of the present invention further provides a terminal, including: a memory and a processor, the memory having a computer program stored therein; wherein:
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements the method of trajectory determination as described above.
In another aspect, an embodiment of the present invention further provides a device for implementing track determination, including: a position determining unit, a judging unit, and a track generating unit; wherein:
the position determining unit is configured to: determining the moving position of each target object for the image frame containing the target object in the video;
the judgment unit is used for: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
the track generation unit is used for: and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
The application includes: determining the moving position of a target object for an image frame containing the target object in a video; judging whether the target objects in adjacent image frames are the same object according to the moving positions of the target objects in the adjacent image frames; and generating the movement trajectory of each target object according to the judgment result. The embodiment of the invention judges whether the target objects are the same object based on their moving positions in adjacent image frames, automatically generates the movement trajectory of each target object based on the judgment result, and thereby improves the efficiency of analyzing the activity of the target objects.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
FIG. 1 is a flow chart of a method for implementing trajectory determination according to an embodiment of the present invention;
FIG. 2 is a block diagram of an apparatus for determining a trajectory according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an exemplary activity location of an application of the present invention;
FIG. 4 is a schematic diagram of an exemplary image frame to which the present invention is applied.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Through analysis, the inventor of the application found that when an image contains multiple mice, the related art does not analyze the movement trajectory of each mouse. Yet in places such as restaurant kitchens, multiple mice commonly appear, so how to determine the movement trajectory of each mouse becomes a problem to be solved.
Fig. 1 is a flowchart of a method for implementing trajectory determination according to an embodiment of the present invention; as shown in Fig. 1, the method includes:
Step 101: determining the moving position of a target object for an image frame containing the target object in a video;
it should be noted that, in the embodiment of the present invention, the moving position of the target object may be determined with reference to an image segmentation algorithm in the related art. Taking a mouse as the target object as an example, after the mouse is identified, the embodiment of the invention may segment the image region covered by the mouse through an image segmentation algorithm and use it as the moving position of the mouse.
In one exemplary embodiment, the target object may include a mouse. When the target object is a mouse, the video in an embodiment may include night video captured by an infrared camera.
Step 102: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
in one exemplary embodiment, the determining whether the target objects in the adjacent image frames are the same object includes:
determining the distance between target objects in adjacent image frames according to the position information of the moving position;
judging whether the target objects in the adjacent image frames are the same object or not according to the determined distance between the target objects in the adjacent image frames;
wherein the moving position is a position of a preset geometric figure region; the position information includes: coordinate information of a reference point defining the determined moving position, size information of the moving position, and size information of the image frame, based on a preset coordinate system and reference point.
It should be noted that the moving position in the embodiment of the present invention may be any geometric region known to those skilled in the art, such as a circle, an ellipse, a rectangle, a square, or a triangle.
The embodiment of the invention realizes the determination of the moving track of the target object based on the moving position of the target object in the adjacent image frames.
In one exemplary embodiment, taking the moving position as a rectangular position as an example, the distance between the target objects in adjacent image frames can be determined by equation (1):

distance = sqrt( (d_center_x / ((dx1 + dx2) / 2))^2 + (d_center_y / ((dy1 + dy2) / 2))^2 )    (1)

wherein d_center_x represents the absolute value of the difference between the horizontal coordinates of the center points of the moving positions of the target object in the subsequent image frame and the previous image frame of the adjacent image frames; d_center_y represents the absolute value of the difference between the vertical coordinates of those center points; dx1 represents the length of the moving position of the target object in the previous image frame; dx2 represents the length of the moving position of the target object in the subsequent image frame; dy1 represents the width of the moving position of the target object in the previous image frame; and dy2 represents the width of the moving position of the target object in the subsequent image frame.
In one exemplary embodiment, determining whether the target objects in adjacent image frames are the same object comprises:
when the distance between the target objects in the adjacent image frames is smaller than a preset distance threshold value, determining that the target objects in the adjacent image frames are the same object;
and when the distance between the target objects in the adjacent image frames is greater than or equal to the distance threshold value, determining that the target objects in the adjacent image frames are different objects.
It should be noted that, in the embodiment of the present invention, the distance threshold may be set by a person skilled in the art based on the video frame rate: the higher the frame rate, the smaller the threshold. In one exemplary implementation, a threshold of 1 may be set for a video at 18 frames per second.
Step 103: generating a moving track of each target object according to a judgment result of whether the target objects in adjacent image frames are the same object;
in one exemplary embodiment, generating the movement trajectory of each target object includes:
when the target objects in adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the starting position of the movement trajectory of that target object;
when the target objects in the adjacent image frames are the same object, the moving position of the target object in the subsequent image frame is added to the moving track of the target object in the previous image frame.
According to the embodiment of the invention, whether the target objects are the same object or not is judged based on the moving positions of the target objects in the adjacent image frames, the moving track of the target objects is automatically generated based on the judgment result, and the moving analysis efficiency of the target objects is improved.
In an exemplary embodiment, when the target objects in the adjacent image frames are the same object, the method in the embodiment of the present invention further includes:
determining whether the number of target objects in a subsequent image frame that are determined to be the same object as the target object in a previous image frame is greater than 1;
and when the number of target objects in the subsequent image frame that are judged to be the same object as the target object in the previous image frame is greater than 1, integrating all the moving positions of those target objects into one moving position through a preset fusion function.
In one exemplary embodiment, where the moving positions are rectangular regions, the fusion function (also called an aggregation or merge function) may include merge(R1, R2, ...), where R1, R2, ... are the moving positions of the target objects that are the same object as the target object in the previous image frame, and merge(R1, R2, ...) represents the smallest rectangular region containing all of R1, R2, ...
In the related art, a motion detection algorithm may identify different body parts of a mouse as separate mice, for example detecting the head as one mouse and the tail as another. In the embodiment of the invention, if one mouse is identified as two mice during detection, the two can be reduced to one by judging whether they are the same object, and the moving positions belonging to the same mouse are integrated by the fusion function. This improves the precision of mouse detection and prevents the analysis of the mouse's movement trajectory from being distorted.
In an exemplary embodiment, before determining the moving position of each target object, the method of the embodiment of the present invention further includes:
and determining image frames containing the target object in the video.
It should be noted that, in the embodiments of the present invention, detection of the target object may be implemented with reference to the related art; for example, moving objects in video image frames are detected through a moving-object detection model, and the detected image frames containing moving objects are classified with an image classification model to determine whether the moving object is the target object.
The embodiment of the present invention further provides a computer storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method for implementing trajectory determination is implemented.
An embodiment of the present invention further provides a terminal, including: a memory and a processor, the memory having a computer program stored therein; wherein:
the processor is configured to execute the computer program in the memory;
the computer program, when executed by a processor, implements a method of implementing trajectory determination as described above.
Fig. 2 is a block diagram of a device for implementing track determination according to an embodiment of the present invention; as shown in Fig. 2, the device comprises a position determining unit, a judging unit, and a track generating unit; wherein:
the position determining unit is configured to: determine the moving position of each target object for the image frames containing the target object in the video;
the judgment unit is used for: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
the track generation unit is used for: and generating the moving track of each target object according to the judgment result of whether the target objects in the adjacent image frames are the same object.
In an exemplary embodiment, the determining unit is specifically configured to:
determining the distance between target objects in adjacent image frames according to the position information of the moving position;
judging whether the target objects in the adjacent image frames are the same object or not according to the determined distance between the target objects in the adjacent image frames;
wherein the moving position is a position of a preset geometric figure region; the position information includes: coordinate information of a reference point defining the determined moving position, size information of the moving position, and size information of the image frame, based on a preset coordinate system and reference point.
In an exemplary embodiment, the determining unit is configured to determine whether the target objects in the adjacent image frames are the same object, and includes:
when the distance between the target objects in the adjacent image frames is smaller than a preset distance threshold value, determining that the target objects in the adjacent image frames are the same object;
and when the distance between the target objects in the adjacent image frames is greater than or equal to the distance threshold value, determining that the target objects in the adjacent image frames are different objects.
In an exemplary embodiment, the track generation unit is specifically configured to:
when the target objects in the adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the initial position of the moving track of the target object;
when the target objects in the adjacent image frames are the same object, the moving position of the target object in the subsequent image frame is added to the moving track of the target object in the previous image frame.
In an exemplary embodiment, the track generation unit is further configured to:
determining whether the number of target objects in the subsequent image frame that are determined to be the same object as the target object in the previous image frame is greater than 1;
and when the number of target objects in the subsequent image frame that are judged to be the same object as the target object in the previous image frame is greater than 1, integrating all the moving positions of those target objects into one moving position through a preset fusion function.
In an exemplary embodiment, the device of the embodiment of the present invention further includes an image determining unit, configured to:
and determining image frames containing the target object in the video.
According to the embodiment of the invention, whether the target objects are the same object or not is judged based on the moving positions of the target objects in the adjacent image frames, the moving track of the target objects is automatically generated based on the judgment result, and the moving analysis efficiency of the target objects is improved.
The method of the embodiment of the present invention is briefly described below through an application example, which is intended only to illustrate the present invention and not to limit its protection scope.
Application example
In order to facilitate the presentation of the application example, the following definitions are given: the moving position of a mouse is represented by a rectangular area R, and the moving positions of mice in different image frames are distinguished by numeric suffixes. Fig. 3 is a schematic diagram of an exemplary moving position to which the present invention is applied. As shown in Fig. 3, the position information of the rectangular area can be represented by the coordinates (x, y, dx, dy, w, h), where w and h respectively represent the length and width of the video image captured by the camera, x and y respectively represent the abscissa and ordinate of the upper-left corner of the moving position, and dx and dy respectively represent the length and width of the rectangular area.
In this application example, the target object is a mouse, and night-time video of a kitchen is collected through an installed infrared camera; moving objects in the collected video are detected using a Gaussian mixture model. The main principle of Gaussian-mixture background modeling is to construct a background for the video and then, for each frame, compute the difference between the frame and the background so as to detect the foreground of the frame, which is judged to be a moving object.
the application example obtains the image classification model through training in the following way: in the training stage, whether a moving object is a mouse or not is marked to obtain a training sample; inputting the obtained training sample into a deep learning convolutional neural network, and training to obtain an image classification model; the convolutional neural network of the application example may include: deepening the network layer number (ResNet), dense convolutional network (Densenet), computer vision group (VGG), and the like; after the image classification model is obtained through training, the image frame of the mouse as the moving object is determined by classifying each frame image (image frame) in the video.
For an image frame whose moving object is a mouse, the moving position of the mouse is obtained through image segmentation; in this application example, the moving position is represented by the rectangular region R = (x, y, dx, dy, w, h) defined above.
In the present application example, the distance between mice in adjacent image frames is determined by the following formula:

distance = sqrt( (d_center_x / ((dx1 + dx2) / 2))^2 + (d_center_y / ((dy1 + dy2) / 2))^2 )

wherein dx1 and dy1 represent the length and width of the mouse's moving position in the previous image frame of the adjacent image frames; dx2 and dy2 represent the length and width of the mouse's moving position in the subsequent image frame; d_center_x represents the absolute value of the difference between the horizontal coordinates of the center points of the mouse's moving positions in the subsequent and previous image frames, calculated as:

d_center_x = | (x2 + dx2 / 2) - (x1 + dx1 / 2) |

and d_center_y represents the absolute value of the difference between the vertical coordinates of those center points, calculated as:

d_center_y = | (y2 + dy2 / 2) - (y1 + dy1 / 2) |

where (x1, y1) and (x2, y2) are the upper-left corner coordinates of the mouse's moving positions in the previous and subsequent image frames, respectively.
After the distance between the mice in adjacent image frames is calculated according to the above formula, when the calculated distance is smaller than the preset distance threshold, the mice in the two image frames are determined to be the same mouse; when the distance is greater than or equal to the preset distance threshold, the mice in the two image frames are determined not to be the same mouse. It should be noted that when the image frames contain multiple mice, the application example selects one mouse at a time from the adjacent image frames and determines, according to the corresponding moving positions, whether the two are the same mouse.
When the mice in adjacent image frames are different mice, the moving position of the mouse in the subsequent image frame is set as the starting position of that mouse's movement trajectory;
when the mice in adjacent image frames are the same mouse, the moving position of the mouse in the subsequent image frame is added to the movement trajectory of the mouse in the previous image frame.
In the present application example, when the number of mice in the subsequent image frame determined to be the same mouse as the one in the previous image frame is greater than 1, all the moving positions determined to belong to that same mouse are integrated into one moving position by the fusion function.
FIG. 4 is a schematic diagram of exemplary image frames of the present invention. Fig. 4 contains 8 image frames, with mice shown as solid dots. A movement trajectory A of the first mouse is generated from its moving position R1, with A = [R1] (R1 is the first trajectory point of this mouse). Two mice appear in the second frame; assume their moving positions are R2 and R3. The application example randomly selects R2 or R3, calculates its distance from R1, and determines from the distance threshold whether the mice at R2 and R3 and the mouse at R1 are the same mouse. Assuming the mice at R2 and R3 are both determined to be the same mouse as the one at R1, the application example may integrate R2 and R3 into one moving position containing both; assuming the integrated position is R3, R3 is added to trajectory A, giving A = [R1, R3]. The situation where the mice at R2 and R3 are actually the same mouse can arise from the motion detection algorithm: for example, in one frame the head and tail of a mouse move while its body does not, so the head may be detected as one mouse and the tail as another. Since R2 and R3 belong to the same frame, it is not appropriate to keep both in trajectory A. For R4 in the third frame, the distance between R4 and R3 is calculated; assuming it is smaller than the distance threshold, R4 is added to A, giving A = [R1, R3, R4]. Two mice with moving positions R5 and R6 appear in the fourth frame, and the distance between R5 and R4 and the distance between R6 and R4 are calculated; assuming the distance between R5 and R4 is smaller than the threshold while the distance between R6 and R4 is greater, R5 is added to A, giving A = [R1, R3, R4, R5], and a new trajectory B is generated to record the activity of the second detected mouse, with B = [R6]. Two mice with moving positions R7 and R8 appear in the fifth frame, and the distances between R7 and R5, R7 and R6, R8 and R5, and R8 and R6 are calculated; assuming only the distance between R7 and R5 and the distance between R8 and R6 are below the threshold, R7 and R8 are added to trajectories A and B respectively, giving A = [R1, R3, R4, R5, R7] and B = [R6, R8].
The processing of the application example is described below in the style of a programming language:
Define a fusion function over multiple moving positions, merge(R1, R2, ...) = (new_x, new_y, new_dx, new_dy, w, h), which represents the smallest rectangular area containing all of R1, R2, ...
Given trajectory data track = [R1, R2, ..., Rk], the distance between a moving position Rk+1 and the trajectory is defined as the distance between Rk+1 and the most recently added position Rk of the trajectory:
distance(track, Rk+1) = distance(Rk, Rk+1);
Assume the video has n frames in total; the i-th frame contains m(i) mouse moving positions, and R(ij) denotes the j-th moving position in the i-th frame.
Define the set of all trajectories as tracks = [] ([] denotes an empty list);
define the maximum allowed distance between adjacent positions R within a trajectory as dis_threshold;
define [a, b] + [c] = [a, b, c];
and, assuming s = [a, b, c], define len(s) = 3, s[0] = a, s[1] = b, s[2] = c.
The first step: initialize i = 0.
The second step: i = i + 1; if i > n, the computation ends.
The third step: let the number of trajectories be lt = len(tracks), i.e., tracks = [track_1, track_2, ..., track_lt].
The fourth step: set a temporary list temp_R = [] and a temporary list new_tracks = [].
The fifth step: initialize j = 0.
The sixth step: j = j + 1; if j > m(i), proceed to the eighth step.
The seventh step: if tracks = [], define R(ij) as a new trajectory new_track = [R(ij)]; let temp_R = temp_R + [lt + 1], lt = lt + 1, new_tracks = new_tracks + [new_track], and return to the sixth step.
Otherwise, calculate the distances between R(ij) and all trajectories in tracks in turn. Suppose R(ij) is closest to the k-th trajectory, at distance dis_min. If dis_min <= dis_threshold, define temp_R = temp_R + [k] and return to the sixth step;
otherwise, calculate the shortest distance between R(ij) and all trajectories in new_tracks, and suppose the distance to the h-th new trajectory is shortest, at distance dis_min1. If dis_min1 <= dis_threshold, define temp_R = temp_R + [len(tracks) + h] and new_tracks[h] = [merge(new_tracks[h][0], R(ij))] (the moving positions of the same mouse within the same frame are merged), and return to the sixth step;
otherwise, R(ij) is a new trajectory new_track = [R(ij)]; let temp_R = temp_R + [lt + 1], lt = lt + 1, new_tracks = new_tracks + [new_track] (separating the trajectories of multiple mice), and return to the sixth step.
The eighth step: append the new moving positions in the current image frame to the original trajectories. That is, for a value not greater than len(tracks) that occurs several times in temp_R, the corresponding positions are merged and appended (for example, if the 1st and 2nd values of temp_R are both 1, then track_1 = track_1 + [merge(R(i1), R(i2))]); and for a value not greater than len(tracks) that occurs only once in temp_R, the corresponding position is appended directly (for example, if the value 2 occurs only once, at the 3rd position, then track_2 = track_2 + [R(i3)]).
The ninth step: append the new trajectories to tracks: tracks = tracks + new_tracks.
The tenth step: return to the second step.
"one of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as is well known to those skilled in the art. ".

Claims (8)

1. A method of implementing trajectory determination, comprising:
determining the moving position of a target object for an image frame containing the target object in a video;
judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
generating a moving track of each target object according to a judgment result of whether the target objects in adjacent image frames are the same object;
the generating of the moving track of each target object includes:
when the target objects in the adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the initial position of the moving track of the target object;
when the target objects in the adjacent image frames are the same object, adding the moving position of the target object in the subsequent image frame to the moving track of the target object in the previous image frame;
when the target objects in the adjacent image frames are the same object, the method further comprises:
determining whether the number of target objects in the subsequent image frame that are determined to be the same object as the target object in the previous image frame is greater than 1;
and when the number of target objects in the subsequent image frame that are judged to be the same object as the target object in the previous image frame is greater than 1, integrating all the moving positions of those target objects into one moving position through a preset fusion function.
2. The method of claim 1, wherein the determining whether the target objects in the adjacent image frames are the same object comprises:
determining the distance between target objects in the adjacent image frames according to the position information of the moving positions;
judging whether the target objects in the adjacent image frames are the same object or not according to the determined distance between the target objects in the adjacent image frames;
wherein the moving position is a position of a preset geometric figure region; the position information includes: coordinate information of a reference point defining the determined moving position, size information of the moving position, and size information of the image frame, based on a preset coordinate system and reference point.
3. The method of claim 2, wherein the active position is a rectangular position and the distance between the target objects in the adjacent image frames is determined by the following formula:
distance = sqrt( (d_center_x / ((dx1 + dx2) / 2))^2 + (d_center_y / ((dy1 + dy2) / 2))^2 )
wherein d_center_x represents, in the adjacent image frames, the absolute value of the difference between the horizontal coordinates of the center points of the moving positions of the target object in the subsequent image frame and the previous image frame; d_center_y represents, in the adjacent image frames, the absolute value of the difference between the vertical coordinates of the center points of the moving positions of the target object in the subsequent image frame and the previous image frame; dx1 represents the length of the moving position of the target object in the previous image frame of the adjacent image frames; dx2 represents the length of the moving position of the target object in the subsequent image frame; dy1 represents the width of the moving position of the target object in the previous image frame; and dy2 represents the width of the moving position of the target object in the subsequent image frame.
4. The method of claim 2, wherein the determining whether the target objects in the adjacent image frames are the same object comprises:
when the distance between the target objects in the adjacent image frames is smaller than a preset distance threshold value, determining that the target objects in the adjacent image frames are the same object;
when the distance between the target objects in the adjacent image frames is larger than or equal to the distance threshold value, determining that the target objects in the adjacent image frames are different objects.
5. The method according to any one of claims 1 to 3, wherein prior to determining the active position of each target object, the method further comprises:
determining an image frame in the video that includes the target object.
6. A computer storage medium, in which a computer program is stored which, when being executed by a processor, carries out a method of carrying out a trajectory determination as claimed in any one of claims 1 to 5.
7. A terminal, comprising: a memory and a processor, the memory having a computer program stored therein; wherein:
the processor is configured to execute the computer program in the memory;
the computer program, when executed by the processor, implements a method of implementing trajectory determination as recited in any one of claims 1 to 5.
8. An apparatus for implementing trajectory determination, comprising: a position determining unit, a judging unit, and a track generating unit; wherein:
the position determining unit is configured to: determining the moving position of each target object in an image frame containing the target object in the video;
the judgment unit is used for: judging whether the target objects in the adjacent image frames are the same object or not according to the moving positions of the target objects in the adjacent image frames;
the track generation unit is used for: generating a moving track of each target object according to a judgment result of whether the target objects in adjacent image frames are the same object;
the generating of the moving track of each target object includes:
when the target objects in the adjacent image frames are different objects, setting the moving position of the target object in the subsequent image frame as the starting position of the moving track of the target object;
when the target objects in the adjacent image frames are the same object, adding the moving position of the target object in the subsequent image frame to the moving track of the target object in the previous image frame;
when the target objects in the adjacent image frames are the same object, the method further comprises:
determining whether the number of target objects in a subsequent image frame that are determined to be the same object as the target object in a previous image frame is greater than 1;
and when the number of target objects in the subsequent image frame that are judged to be the same object as the target object in the previous image frame is greater than 1, integrating all the moving positions of those target objects into one moving position through a preset fusion function.
CN202010265917.XA 2020-04-07 2020-04-07 Method and device for determining track, computer storage medium and terminal Active CN111539974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010265917.XA CN111539974B (en) 2020-04-07 2020-04-07 Method and device for determining track, computer storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010265917.XA CN111539974B (en) 2020-04-07 2020-04-07 Method and device for determining track, computer storage medium and terminal

Publications (2)

Publication Number Publication Date
CN111539974A CN111539974A (en) 2020-08-14
CN111539974B (en) 2022-11-11

Family

ID=71980444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010265917.XA Active CN111539974B (en) 2020-04-07 2020-04-07 Method and device for determining track, computer storage medium and terminal

Country Status (1)

Country Link
CN (1) CN111539974B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103646253A (en) * 2013-12-16 2014-03-19 重庆大学 Bus passenger flow statistics method based on multi-motion passenger behavior analysis
CN103929685A (en) * 2014-04-15 2014-07-16 中国华戎控股有限公司 Video abstract generating and indexing method
CN104966045A (en) * 2015-04-02 2015-10-07 北京天睿空间科技有限公司 Video-based airplane entry-departure parking lot automatic detection method
CN108664912A (en) * 2018-05-04 2018-10-16 北京学之途网络科技有限公司 A kind of information processing method, device, computer storage media and terminal
CN109886999A (en) * 2019-01-24 2019-06-14 北京明略软件系统有限公司 Location determining method, device, storage medium and processor


Also Published As

Publication number Publication date
CN111539974A (en) 2020-08-14


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230811

Address after: 200232 unit 5b06, floor 5, building 2, No. 277, Longlan Road, Xuhui District, Shanghai

Patentee after: Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co.,Ltd.

Address before: 100084 a1002, 10th floor, building 1, yard 1, Zhongguancun East Road, Haidian District, Beijing

Patentee before: MININGLAMP SOFTWARE SYSTEMS Co.,Ltd.

TR01 Transfer of patent right