CN109472809B - Target identification method and device - Google Patents

Target identification method and device Download PDF

Info

Publication number
CN109472809B
CN109472809B (application CN201710794884.6A)
Authority
CN
China
Prior art keywords
target object
video frame
identified
video
characteristic parameters
Prior art date
Legal status
Active
Application number
CN201710794884.6A
Other languages
Chinese (zh)
Other versions
CN109472809A (en)
Inventor
Li Yang (李阳)
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN201710794884.6A
Publication of CN109472809A
Application granted
Publication of CN109472809B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/292 Multi-camera tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Abstract

Embodiments of the invention provide a target identification method and device to address the prior-art problem that a target being tracked is easily lost. The target identification method comprises: extracting at least three video frames from the acquired video streams, and extracting characteristic parameters of the target object from each video frame to obtain a characteristic parameter set, where the at least three video frames come from at least three different cameras; extracting characteristic parameters of an object to be identified in a first video frame, and determining the distance from the object to be identified to the corresponding camera, where the first video frame is any video frame other than the at least three video frames; and identifying the target object in the first video frame according to the distance, the characteristic parameters, and the characteristic parameter set.

Description

Target identification method and device
Technical Field
The invention relates to the technical field of video monitoring, in particular to a target identification method and device.
Background
Intelligent video surveillance is a method of actively monitoring a target: the live images captured by cameras are analyzed and processed in real time by video analysis techniques, and once an emergency occurs, the target (such as a pedestrian or a vehicle) can be locked and then tracked by controlling the orientation, viewing angle, and so on of the camera.
The core of intelligent video surveillance is the target detection and tracking method. Current target identification methods mainly perform pure video analysis on the footage shot by a camera and determine the target to be tracked from that footage according to the target's features.
However, the target's features may change as the shooting environment changes. For example, under poor lighting conditions, or when the target to be tracked is occluded by other objects, the target's features often change, which can interrupt or corrupt target tracking and may even cause the tracked target to be lost.
It can be seen that current target identification methods adapt poorly to the shooting environment, so the tracked target is easily lost.
Disclosure of Invention
The embodiment of the invention provides a target identification method and device, which are used for solving the technical problem that a target to be tracked is easy to lose in the prior art.
In a first aspect, an embodiment of the present invention provides a target identification method, comprising:
extracting at least three video frames from the acquired video stream, and extracting the characteristic parameters of the target object in each video frame to obtain a characteristic parameter set; wherein the at least three video frames are from at least three different cameras;
extracting characteristic parameters of an object to be identified in a first video frame, and determining the distance from the object to be identified to the corresponding camera; the first video frame is any video frame other than the at least three video frames;
and identifying the target object in the first video frame according to the distance, the characteristic parameters and the characteristic parameter set.
In one possible implementation, extracting at least three video frames from the captured video stream, and extracting feature parameters of the target object in each video frame includes:
respectively acquiring a foreground image of each video frame;
and respectively extracting characteristic parameters of the target object from each acquired foreground image, wherein the characteristic parameters at least comprise one or any combination of colors and textures.
In one possible implementation, identifying the target object in the first video frame according to the distance, the feature parameter, and the feature parameter set includes:
judging whether the characteristic parameters of the object to be identified match the characteristic parameter set;
if so, determining that the object to be identified is the target object;
otherwise, when the distance is judged to be within a preset distance range, determining that the object to be identified is the target object, and adding the unmatched characteristic parameters to the characteristic parameter set; otherwise, continuing to identify the target object in video frames other than the first video frame.
In one possible implementation manner, identifying the target object in the first video frame according to the distance, the feature parameter, and the feature parameter set further includes:
and continuously identifying the target object in other video frames except the first video frame.
In one possible implementation, the target identification method further includes:
determining the motion track of the target object according to the activity information of the target object and the position coordinates of the at least three different cameras;
wherein the activity information includes the time, speed, and moving direction of the target object as it passes a given location.
In a second aspect, an embodiment of the present invention provides a target identification apparatus, comprising:
the extraction unit is used for extracting at least three video frames from the acquired video stream, and extracting the characteristic parameters of the target object in each video frame to obtain a characteristic parameter set; wherein the at least three video frames are from at least three different cameras;
the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for extracting characteristic parameters of an object to be identified in a first video frame and determining the distance from the object to be identified to a corresponding camera; the first video frame is any one other video frame except the at least three video frames;
and the identification unit is used for identifying the target object in the first video frame according to the distance, the characteristic parameter and the characteristic parameter set.
In a possible implementation manner, the extracting unit is specifically configured to:
respectively acquiring a foreground image of each video frame;
and respectively extracting characteristic parameters of the target object from each acquired foreground image, wherein the characteristic parameters at least comprise one or any combination of colors and textures.
In a possible implementation manner, the identifying unit is specifically configured to:
judge whether the characteristic parameters of the object to be identified match the characteristic parameter set;
if so, determine that the object to be identified is the target object;
otherwise, when the distance is judged to be within the preset distance range, determine that the object to be identified is the target object, and add the unmatched characteristic parameters to the characteristic parameter set.
In one possible implementation, the identifying unit is further configured to:
and continuously identifying the target object in other video frames except the first video frame.
In a possible implementation manner, the determining unit is further configured to:
determining the motion track of the target object according to the activity information of the target object and the position coordinates of the at least three different cameras;
wherein the activity information includes time, speed and moving direction information of the target object passing a certain location.
In a third aspect, an embodiment of the present invention provides a computer device, including:
at least one processor;
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor performs the method of any implementation of the first aspect by executing the instructions stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, including:
the computer-readable storage medium stores computer instructions which, when run on a computer, cause the computer to perform the method of any implementation of the first aspect.
In the embodiments of the invention, the object to be identified is identified according to both its characteristic parameters in the first video frame and its distance to the corresponding camera, rather than according to the characteristic parameters alone. Therefore, even if the extracted features of the object to be identified have changed, whether it is the target object can still be determined from its distance to the corresponding camera, so the target object keeps being tracked and the tracked target is, as far as possible, not lost.
Drawings
FIG. 1 is a schematic flow chart of a target identification method provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a target identification apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings.
In current target identification methods, under poor lighting or when the target is occluded, the characteristic parameters of the target object captured by the camera may change. Performing pure video analysis on the footage shot by the camera and determining the target object from that footage by its features alone then clearly leads to interrupted or erroneous tracking, and may even cause the tracked target to be lost.
In view of this, in the technical solution provided by the embodiment of the present invention, the target object is determined from both the extracted characteristic parameters of the object to be identified and the distance from the object to be identified to the corresponding camera. Because identification does not rely on the characteristic parameters alone, the target object can be identified more reliably and the tracked target is, as far as possible, not lost.
The target identification method provided by the embodiment of the invention can be applied to any electronic device with computing capability, such as a personal computer or a server; the embodiment does not limit the type of electronic device. Hereinafter, the method is uniformly described as being executed by the electronic device.
The technical scheme provided by the embodiment of the invention is described in the following with reference to the drawings in the specification.
Referring to fig. 1, an embodiment of the present invention provides a target identification method, and a flow of the target identification method is described as follows.
S101: extracting at least three video frames from the acquired video stream, and extracting the characteristic parameters of the target object in each video frame to obtain a characteristic parameter set, wherein the at least three video frames are from at least three different cameras.
S102: the method comprises the steps of extracting characteristic parameters of an object to be identified in a first video frame, and determining the distance from the object to be identified to a corresponding camera, wherein the first video frame is any one other than at least three video frames.
S103: and identifying the target object in the first video frame according to the distance, the characteristic parameters and the characteristic parameter set.
According to the embodiment of the invention, a plurality of cameras can be arranged in the area to be monitored, so that every object entering the area is captured by several cameras. The area to be monitored may be any region, such as an office area or a parking lot. The cameras may be distributed according to the actual situation of the area: for example, a relatively small office area may be covered by three cameras (or more), while a relatively large parking lot may be covered by six cameras (or more). However many cameras are arranged, the fields of view of adjacent cameras need to overlap, so that several cameras can shoot the same object from different angles at the same time. Then, even if no useful features of the object to be identified can be obtained from the video frame shot from one angle, that is, the features obtained are insufficient to identify the object, the needed features can still be obtained from a video frame shot from another angle, which facilitates identifying the object to be identified.
In the embodiment of the invention the target object is to be tracked, so it must first be identified, and identification is usually realized by checking whether the features of the object to be identified match those of the target object. For example, if the target object is a person, the clothing color, head features, and facial features of the object to be identified can be compared with those of the target object for consistency. If the target object is a vehicle, color features, license-plate features, and so on can be compared.
The electronic device in the embodiment of the invention can acquire the features of the target object from the video streams collected by the cameras arranged in the area to be monitored. Taking at least three cameras as an example: when the target object enters the area to be monitored, the at least three cameras capture it from different angles, one camera corresponding to one video stream. After collecting the video streams, the cameras can either send them to the electronic device actively, or send them after receiving a request message from the electronic device. After receiving the at least three video streams, the electronic device can extract one video frame from each stream, obtaining at least three video frames from which to extract the features of the target object.
After obtaining the at least three video frames, the electronic device can acquire the foreground image of each video frame, that is, the image with the background removed, and then extract the characteristic parameters of the target object from each foreground image to obtain the characteristic parameter set, i.e., the collection of all characteristic parameters obtained for the target object. The characteristic parameters may include one of color and texture, or any combination of them; the embodiment of the invention does not limit the type and number of the characteristic parameters.
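As an illustrative sketch (not part of the claimed method), the foreground-plus-color-feature extraction described above might look as follows; the frame-differencing segmentation, the 4-bin intensity histogram, and the list-of-lists grayscale "image" format are all assumptions made for the example:

```python
# Hedged sketch: extract a simple color-histogram "characteristic parameter"
# from a foreground image. Frame differencing against a background frame stands
# in for whatever foreground-segmentation method the patent assumes.

def foreground_mask(frame, background, threshold=30):
    """Mark a pixel as foreground when it differs enough from the background."""
    return [[abs(p - b) > threshold for p, b in zip(row_f, row_b)]
            for row_f, row_b in zip(frame, background)]

def color_histogram(frame, mask, bins=4):
    """Histogram of foreground pixel intensities, quantized into `bins` buckets."""
    hist = [0] * bins
    for row_f, row_m in zip(frame, mask):
        for p, keep in zip(row_f, row_m):
            if keep:
                hist[p * bins // 256] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]   # normalize so frames are comparable

background = [[10, 10], [10, 10]]
frame      = [[10, 200], [10, 250]]   # two bright foreground pixels
mask = foreground_mask(frame, background)
print(color_histogram(frame, mask))   # bin 3 holds both pixels: [0.0, 0.0, 0.0, 1.0]
```

A real implementation would use a proper background-subtraction model and richer color/texture descriptors; the point of the sketch is only the shape of the output, a normalized histogram usable as a characteristic parameter.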
After the characteristic parameter set of the target object is obtained, whether an object to be identified in the video frames collected by the at least three cameras is the target object can be judged through the characteristic parameters. The electronic device may determine the object to be identified in any one of the video frames, or determine the object to be identified in each video frame in advance, and then decide whether it is the target object by matching its characteristic parameters against the characteristic parameter set. In the following, how the electronic device identifies whether an object is the target object is described, taking any one of the video frames as the first video frame.
The electronic device in the embodiment of the present invention may extract the characteristic parameters of the object to be identified in the first video frame captured by any one camera at a certain moment; in a specific implementation, the characteristic parameters can be extracted from each video frame in the manner described above, which is not repeated here. The electronic device matches the extracted characteristic parameters of the object to be identified against the characteristic parameter set; if the matching succeeds, that is, the characteristic parameters of the object to be identified are consistent with those of the target object, the object to be identified can be determined to be the target object.
The features of the object to be identified in the video frames shot by one camera may not always be identical. If the scene of the monitored area does not change, the features of the object to be identified in consecutive frames from one camera may stay the same; if the scene changes, for example the light dims or an occluder appears, features may be added or lost across consecutive frames, that is, the features of the object to be identified change, and so do the extracted characteristic parameters. When the characteristic parameters change because the scene changed, the object to be identified is actually still the target object, so determining the target object only by the characteristic parameters, as in the prior art, is clearly inaccurate.
Therefore, when the characteristic parameters of the object to be identified do not match the characteristic parameter set, the electronic device in the embodiment of the invention can infer that the object may be under poor lighting or occluded. The electronic device can then determine the distance from the object to be identified to the corresponding camera and use that distance to further judge whether the object is the target object. The embodiment does not limit the order in which the electronic device extracts the characteristic parameters of the object to be identified in the first video frame and determines its distance to the corresponding camera.
In the embodiment of the invention, a laser ranging device can be installed at the position of each camera to measure the distance from the object to be identified to the corresponding camera and send the measurement to the electronic device. The laser ranging device can record the time point of each measurement and send the measurements periodically, so that the electronic device knows the distance from the object to be identified to the corresponding camera at each time point. Because the measurement precision of a laser ranging device is generally at the centimeter level, if the object to be identified is blocked by an occluder, the measured distance changes greatly. For example, with no occluder the measured distance from the object to be identified to the corresponding camera might be 10 m, while with an occluder in the way the measured distance might be 2 m.
In general, the moving speed of the target object is substantially constant; therefore, even under poor lighting, the target object will have moved to a predictable position within a preset time period. So even if the characteristic parameters of the object to be identified have changed, the electronic device can check whether its distance to the corresponding camera lies within a preset distance range. This range is formed from the distances reported by the laser ranging device at each time point together with the displacement implied by the target's moving speed; in other words, it is the plausible distance from the target object to the corresponding camera under normal conditions. If the distance lies within the range, the electronic device can determine that the object to be identified is still the target object, merely under poor lighting, and can continue to identify it in the frame shot by the camera at the next moment, thereby keeping the target object tracked. Tracking is therefore not interrupted, and the tracked target not lost, simply because the characteristic parameters of the object to be identified fail to match the target object's characteristic parameter set.
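The "preset distance range" test can be sketched as follows; the symmetric window around the last reported distance and the 0.5 m tolerance are illustrative assumptions, not values from the patent:

```python
# Hedged sketch of the preset-distance-range test: starting from the last
# distance reported by the laser rangefinder, a target moving at a roughly
# constant speed should now be within last_distance +/- speed * elapsed,
# plus a small tolerance for measurement noise.

def in_expected_range(last_distance, speed, elapsed, measured, tolerance=0.5):
    """True if the measured camera distance is plausible for the tracked target."""
    max_travel = speed * elapsed + tolerance
    return abs(measured - last_distance) <= max_travel

# Pedestrian at ~1 m/s, last seen 10 m from the camera, 2 s ago:
print(in_expected_range(10.0, 1.0, 2.0, 9.0))   # True: plausible, same target
print(in_expected_range(10.0, 1.0, 2.0, 2.0))   # False: jump to 2 m, occluder
```

The sudden drop from 10 m to 2 m mirrors the occlusion example above: a reading far outside the plausible window indicates the rangefinder is now hitting the occluder, not the target.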
In this case it is clear that, even though the characteristic parameters of the object to be identified do not match the characteristic parameter set, the object is in fact the target object. Therefore, the electronic device can add the characteristic parameters of the object to be identified to the set, retaining the original parameters while also recording the target's parameters under poor lighting, forming a new characteristic parameter set used to keep tracking the target. In subsequent tracking, even under poor lighting, whether an object to be identified is the target object can then be determined simply by matching its characteristic parameters against the set, without further combining the distance, which reduces the burden on the electronic device.
On the contrary, if the distance measured by the laser ranging device differs from the predicted distance and is not within the preset distance range, it can be determined that the object to be identified is blocked by an occluder, and there is no need to continue tracking it in this view. The target object can instead be identified in the video frames shot by the other cameras; since those cameras shoot from other angles, the object to be identified in their frames is not necessarily occluded, so tracking of the target object can continue.
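Putting the two tests together, the decision logic of step S103 might be sketched like this; the function name and return values are our own, and `matched` / `distance_ok` stand for the feature-set match and the preset-distance-range test described in the text:

```python
# Hedged sketch of the full S103 decision: feature match => target; no match
# but plausible distance => still the target (and the set absorbs the changed
# features); otherwise hand off to another camera's frames.

def identify(candidate_params, parameter_set, matched, distance_ok):
    """Return the action the electronic device takes for one video frame."""
    if matched:
        return "target"
    if distance_ok:
        parameter_set.append(candidate_params)   # keep the changed features too
        return "target"
    return "switch-camera"                       # keep identifying in other frames

param_set = [[0.7, 0.3]]
print(identify([0.1, 0.9], param_set, matched=False, distance_ok=True))   # target
print(len(param_set))                                                     # 2
print(identify([0.5, 0.5], param_set, matched=False, distance_ok=False))  # switch-camera
```

Note how the second branch is what distinguishes this method from the prior art: a failed feature match alone no longer drops the target.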
After determining that the object to be identified is the target object, the electronic device in the embodiment of the invention can further determine the motion track of the target object, so that the target object is tracked effectively. In one possible implementation, the electronic device determines the motion track from the activity information of the target object and the position coordinates of the at least three different cameras. The activity information includes the time, speed, and moving direction of the target object as it passes a given location.
The given location may be where a camera is installed. The speed of the target object may be its normal moving speed; for example, a pedestrian's speed is generally about 1 m/s. The moving direction may be determined from the video shot by the cameras. For example, suppose the area to be monitored is an office area with four cameras arranged in four opposing directions, shooting the target object from different angles. When the target object passes through the area, the four cameras shoot it from four different directions, and the electronic device can determine the target's position coordinates from its activity information as it passes the cameras, such as its distances and times to the four cameras, together with its moving speed and/or direction and the cameras' position coordinates. Sorting the target's position coordinates at each time point in time order yields the sequence of position coordinates, which is the target object's motion track.
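One way the position coordinates could be computed from the cameras' coordinates and the measured distances is plain 2-D trilateration; this sketch is our construction, not a formula from the patent. It solves the linearized circle equations for three cameras and sorts the resulting fixes by time to form the track:

```python
# Hedged sketch: 2-D position from three camera positions and the distances
# to each, then a time-sorted list of positions as the motion track.

def locate(cams, dists):
    """Solve the linearized trilateration system for a 2-D position."""
    (x1, y1), (x2, y2), (x3, y3) = cams
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1                      # nonzero for non-collinear cameras
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def trajectory(observations):
    """Time-sorted positions, i.e. the target's motion track.

    Each observation is (timestamp, camera_coords, measured_distances)."""
    return [locate(cams, dists) for _, cams, dists in sorted(observations)]

cams = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
# Target truly at (3, 4): distances 5, sqrt(65), sqrt(45)
obs = [(1.0, cams, (5.0, 65**0.5, 45**0.5))]
print(trajectory(obs))   # approximately [(3.0, 4.0)]
```

With four cameras, as in the office example, the system is overdetermined and a least-squares fit would be the natural extension.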
In the target identification method provided by the embodiment of the invention, whether an object to be identified is the target object is determined from both its features and its distance to the corresponding camera, not from its characteristic parameters alone. Therefore, even if the extracted features of the object to be identified change, for example under poor lighting, it can still be determined whether it is the target object, and the tracked target is, as far as possible, not lost. At the same time, adaptability to lighting conditions when identifying the object to be identified is improved.
The following describes the apparatus provided by the embodiment of the present invention with reference to the drawings.
Referring to fig. 2, based on the same inventive concept, an embodiment of the present invention provides a target identification apparatus, which includes an extracting unit 201, a determining unit 202, and an identifying unit 203. The extracting unit 201 may be configured to extract at least three video frames from the acquired video streams and extract the characteristic parameters of the target object in each video frame to obtain a characteristic parameter set, where the at least three video frames come from at least three different cameras. The determining unit 202 may be configured to extract characteristic parameters of an object to be identified in a first video frame and determine the distance from the object to be identified to the corresponding camera, where the first video frame is any video frame other than the at least three video frames. The identifying unit 203 may be configured to identify the target object in the first video frame according to the distance, the characteristic parameters, and the characteristic parameter set.
In a possible implementation manner, the extraction unit 201 is specifically configured to:
respectively acquiring a foreground image of each video frame;
and respectively extracting characteristic parameters of the target object from each acquired foreground image, wherein the characteristic parameters at least comprise one or any combination of colors and textures.
In a possible implementation, the authentication unit 203 is specifically configured to:
judging whether the characteristic parameters of the object to be identified are matched with the characteristic parameter set;
if so, determining that the object to be identified is the target object;
otherwise, when the distance is judged to be within the preset distance range, determining that the object to be identified is the target object, and adding the unmatched characteristic parameters to the characteristic parameter set.
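The match-then-distance decision rule above can be illustrated with the following sketch. The histogram-intersection similarity, the 0.8 match threshold, and the distance range are illustrative assumptions; the patent does not specify a particular matching metric.

```python
import numpy as np

def identify(candidate_feat, feature_set, distance,
             dist_range=(0.5, 30.0), match_thresh=0.8):
    """Sketch of the identification unit's decision rule.
    All thresholds are illustrative assumptions."""
    def similarity(a, b):
        # Histogram intersection: 1.0 for identical normalized histograms.
        return float(np.minimum(a, b).sum())

    # Step 1: do the candidate's characteristic parameters match the set?
    if any(similarity(candidate_feat, f) >= match_thresh for f in feature_set):
        return True  # matched: the object to be identified is the target object

    # Step 2: no match, so fall back on the distance-to-camera check.
    lo, hi = dist_range
    if lo <= distance <= hi:
        # Still the target (e.g. lighting changed its appearance), so
        # learn the new appearance by adding it to the feature set.
        feature_set.append(candidate_feat)
        return True
    return False
```

The side effect in step 2 mirrors the text: unmatched characteristic parameters are added to the set, so later frames can match the changed appearance directly.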
In a possible implementation, the identification unit 203 may further be configured to:
continue to identify the target object in video frames other than the first video frame.
In one possible implementation, the determining unit 202 may further be configured to:
determining the motion trajectory of the target object according to the activity information of the target object and the position coordinates of the at least three different cameras;
the activity information includes the time, speed, and moving direction of the target object when it passes a certain position.
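As a sketch, trajectory determination could combine the time-ordered sightings with the known camera coordinates. The `Activity` record and the one-second dead-reckoning step below are hypothetical illustrations of how the time, speed, and direction information might be used; they are not defined in the patent.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    camera_id: int
    time: float            # seconds since tracking started
    speed: float           # metres per second
    heading: tuple         # unit direction vector (dx, dy)

def motion_trajectory(activities, camera_positions):
    """Sketch: order the sightings by time, anchor each at its camera's
    position coordinates, and dead-reckon one second ahead from the
    observed speed and heading (an illustrative assumption)."""
    traj = []
    for act in sorted(activities, key=lambda a: a.time):
        x, y = camera_positions[act.camera_id]
        traj.append((act.time, (x, y)))
        # Extrapolated point one second later, from speed and direction.
        dx, dy = act.heading
        traj.append((act.time + 1.0, (x + act.speed * dx, y + act.speed * dy)))
    return traj
```

With at least three cameras, the anchored points give a coarse piecewise trajectory even when the target is invisible between camera fields of view.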
The device may be configured to execute the method provided in the embodiment shown in fig. 1; therefore, for the functions that can be realized by each functional module of the device, reference may be made to the description of that embodiment, which is not repeated here.
Referring to fig. 3, an embodiment of the present invention further provides a computer device, which includes at least one processor 301 and a memory 302 connected to the at least one processor 301. Wherein the memory 302 stores instructions executable by the at least one processor 301, and the at least one processor 301 executes the instructions stored in the memory 302 to perform the method shown in fig. 1.
In a specific implementation, each processor 301 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits for controlling program execution, a hardware circuit developed with a field-programmable gate array (FPGA), or a baseband processor.
The memory 302 may include a read-only memory (ROM), a random access memory (RAM), and disk storage, and is used to store data required by the processor 301 during operation. There may be one or more memories 302. The memory 302 is shown in fig. 3, but it should be understood that it is an optional functional module, which is why it is drawn with a dashed line in fig. 3.
Based on the same inventive concept, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the method shown in fig. 1.
In a specific implementation, the computer-readable storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other storage media capable of storing program code.
It will be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A method of identifying an object, comprising:
extracting at least three video frames from the acquired video stream, and extracting the characteristic parameters of the target object in each video frame to obtain a characteristic parameter set; wherein the at least three video frames are from at least three different cameras;
extracting characteristic parameters of an object to be identified in a first video frame, and determining the distance from the object to be identified to a corresponding camera; the first video frame is any one other video frame except the at least three video frames;
identifying the target object in the first video frame according to the distance, the characteristic parameter and the characteristic parameter set;
wherein identifying the target object in the first video frame according to the distance, the feature parameter, and the set of feature parameters comprises:
judging whether the characteristic parameters of the object to be identified are matched with the characteristic parameter set;
if not, when the distance is judged to be within the preset distance range, determining that the object to be identified is the target object, and adding the unmatched characteristic parameters to the characteristic parameter set.
2. The method of claim 1, wherein extracting at least three video frames from the captured video stream and extracting feature parameters of the target object in each video frame comprises:
respectively acquiring a foreground image of each video frame;
and respectively extracting the characteristic parameters of the target object from each acquired foreground image, where the characteristic parameters include at least one of color and texture, or a combination of both.
3. The method of claim 1, further comprising:
and continuously identifying the target object in other video frames except the first video frame.
4. The method of any of claims 1-3, further comprising:
determining the motion track of the target object according to the activity information of the target object and the position coordinates of the at least three different cameras;
wherein the activity information includes time, speed and moving direction information of the target object passing a certain location.
5. A target identification apparatus, comprising:
the extraction unit is used for extracting at least three video frames from the acquired video stream, and extracting the characteristic parameters of the target object in each video frame to obtain a characteristic parameter set; wherein the at least three video frames are from at least three different cameras;
the device comprises a determining unit, a judging unit and a judging unit, wherein the determining unit is used for extracting characteristic parameters of an object to be identified in a first video frame and determining the distance from the object to be identified to a corresponding camera; the first video frame is any one other video frame except the at least three video frames;
the identification unit is used for identifying the target object in the first video frame according to the distance, the characteristic parameters and the characteristic parameter set;
the identification unit is specifically configured to:
judging whether the characteristic parameters of the object to be identified are matched with the characteristic parameter set;
if not, when the distance is judged to be within the preset distance range, determining that the object to be identified is the target object, and adding the unmatched characteristic parameters to the characteristic parameter set.
6. The apparatus of claim 5, wherein the identification unit is further configured to:
and continuously identifying the target object in other video frames except the first video frame.
7. A computer device, comprising:
at least one processor;
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor performing the method of any one of claims 1-4 by executing the instructions stored by the memory.
8. A computer-readable storage medium characterized by:
the computer readable storage medium stores computer instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-4.
CN201710794884.6A 2017-09-06 2017-09-06 Target identification method and device Active CN109472809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710794884.6A CN109472809B (en) 2017-09-06 2017-09-06 Target identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710794884.6A CN109472809B (en) 2017-09-06 2017-09-06 Target identification method and device

Publications (2)

Publication Number Publication Date
CN109472809A CN109472809A (en) 2019-03-15
CN109472809B true CN109472809B (en) 2020-09-25

Family

ID=65658320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710794884.6A Active CN109472809B (en) 2017-09-06 2017-09-06 Target identification method and device

Country Status (1)

Country Link
CN (1) CN109472809B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001273503A (en) * 2000-03-23 2001-10-05 Eiji Kawamura Motion recognition system
CN102360423A (en) * 2011-10-19 2012-02-22 丁泉龙 Intelligent human body tracking method
CN103295016A (en) * 2013-06-26 2013-09-11 天津理工大学 Behavior recognition method based on depth and RGB information and multi-scale and multidirectional rank and level characteristics
CN103679742A (en) * 2012-09-06 2014-03-26 株式会社理光 Method and device for tracking objects
CN103888731A (en) * 2014-03-24 2014-06-25 公安部第三研究所 Structured description device and system for mixed video monitoring by means of gun-type camera and dome camera
CN104978558A (en) * 2014-04-11 2015-10-14 北京数码视讯科技股份有限公司 Target identification method and device
CN105320928A (en) * 2014-06-30 2016-02-10 本田技研工业株式会社 Object recognition apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8948447B2 * 2011-07-12 2015-02-03 Lucasfilm Entertainment Company, Ltd. Scale independent tracking pattern

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on moving object detection and tracking algorithms based on camera motion control; Zhang Simin; Fujian Computer; 2013-12-31; full text *

Also Published As

Publication number Publication date
CN109472809A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
Fernandez-Sanjurjo et al. Real-time visual detection and tracking system for traffic monitoring
CN110443210B (en) Pedestrian tracking method and device and terminal
CN108447091B (en) Target positioning method and device, electronic equipment and storage medium
CN110910655A (en) Parking management method, device and equipment
CN108694399B (en) License plate recognition method, device and system
CN112465866B (en) Multi-target track acquisition method, device, system and storage medium
CN105979143B (en) Method and device for adjusting shooting parameters of dome camera
WO2018135922A1 (en) Method and system for tracking object of interest in real-time in multi-camera environment
CN111079621B (en) Method, device, electronic equipment and storage medium for detecting object
US11023717B2 (en) Method, apparatus, device and system for processing commodity identification and storage medium
CN110867083B (en) Vehicle monitoring method, device, server and machine-readable storage medium
CN112150514A (en) Pedestrian trajectory tracking method, device and equipment of video and storage medium
CN108830204B (en) Method for detecting abnormality in target-oriented surveillance video
CN108040244B (en) Snapshot method and device based on light field video stream and storage medium
CN109685062B (en) Target detection method, device, equipment and storage medium
CN114155488A (en) Method and device for acquiring passenger flow data, electronic equipment and storage medium
CN113869258A (en) Traffic incident detection method and device, electronic equipment and readable storage medium
CN109472809B (en) Target identification method and device
CN106683113B (en) Feature point tracking method and device
CN112435479B (en) Target object violation detection method and device, computer equipment and system
CN109740518B (en) Method and device for determining object in video
CN113962338A (en) Indoor monitoring method and system for RFID-assisted multi-camera detection and tracking
CN111639640A (en) License plate recognition method, device and equipment based on artificial intelligence
CN112597924A (en) Electric bicycle track tracking method, camera device and server
CN114038197B (en) Scene state determining method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 19th floor, No.29, Financial Street, Xicheng District, Beijing 100032

Patentee after: CHINA MOBILE COMMUNICATIONS GROUP Co.,Ltd.

Patentee after: CHINA MOBILE COMMUNICATIONS Corp.

Address before: 19th floor, No.29, Financial Street, Xicheng District, Beijing 100032

Patentee before: CHINA MOBILE COMMUNICATION LTD., Research Institute

Patentee before: CHINA MOBILE COMMUNICATIONS Corp.

CP01 Change in the name or title of a patent holder

Address after: 19th floor, No.29, Financial Street, Xicheng District, Beijing 100032

Patentee after: CHINA MOBILE COMMUNICATION LTD., Research Institute

Patentee after: China Mobile Communications Group Co., Ltd

Address before: 19th floor, No.29, Financial Street, Xicheng District, Beijing 100032

Patentee before: CHINA MOBILE COMMUNICATION LTD., Research Institute

Patentee before: China Mobile Communications Corporation