CN114255409A - Man-vehicle information association method, device, equipment and storage medium

Man-vehicle information association method, device, equipment and storage medium

Info

Publication number
CN114255409A
CN114255409A
Authority
CN
China
Prior art keywords
vehicle
coordinate value
identification frame
vertex coordinate
personnel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011009511.1A
Other languages
Chinese (zh)
Inventor
穆菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202011009511.1A priority Critical patent/CN114255409A/en
Priority to PCT/CN2021/118538 priority patent/WO2022063002A1/en
Publication of CN114255409A publication Critical patent/CN114255409A/en

Classifications

    • G06N 20/00: Machine learning
    • G06N 3/02: Neural networks (computing arrangements based on biological models)
    • G06V 10/70: Image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06V 20/40: Scenes; scene-specific elements in video content
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application relate to the field of artificial-intelligence image recognition, and disclose a person-vehicle information association method, apparatus, device and storage medium. In the method, feature extraction is performed on the vehicle identification frames and person identification frames marked in a video to be processed, which effectively avoids interference from background factors and ensures the accuracy of the extracted vehicle information and person information. Whether a vehicle and a person are associated is judged from the calibrated vehicle identification frame and person identification frame, and when an association is determined, the otherwise isolated vehicle information and person information are linked. This realizes the association of person and vehicle information and greatly reduces the search time and the waste of manpower and material resources in subsequent application scenarios such as finding a person from a vehicle or a vehicle from a person.

Description

Man-vehicle information association method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the field of artificial intelligence image recognition, in particular to a method, a device, equipment and a storage medium for associating human-vehicle information.
Background
With the development of artificial intelligence technology, intelligent video security systems are developing rapidly. At present, such systems have become important components of national information construction, national (community) security construction and the like. Meanwhile, with the development of machine learning technology, recognition technologies for vehicles, license plates, pedestrians, human faces and the like based on deep learning have gradually matured. These technologies provide very convenient support for intelligent security work, which has achieved great success as a result.
However, in current intelligent security systems, vehicle information about a vehicle and person information about a person are two completely independent parts, i.e., no association is established between the vehicle and the person. As a result, the relevant workers must spend much time and effort to identify, from massive video data, a vehicle and the persons associated with it.
Therefore, how to associate vehicles with persons, so as to reduce the time consumed in searching and the waste of manpower and material resources, is a problem in urgent need of a solution.
Disclosure of Invention
An embodiment of the present application provides a method, an apparatus, a device and a storage medium for associating person and vehicle information, so as to solve the above technical problem.
In order to solve the technical problem, an embodiment of the present application provides a method for associating people and vehicles information, including:
acquiring a video to be processed;
identifying vehicles and personnel included in the video to be processed to obtain a vehicle identification frame and a personnel identification frame;
respectively extracting the characteristics of the vehicle identification frame and the personnel identification frame to obtain vehicle information and personnel information;
judging whether the vehicle is associated with the person or not according to the vehicle identification frame and the person identification frame;
and if the association exists, associating the vehicle information with the personnel information.
In order to achieve the above object, an embodiment of the present application further provides a human-vehicle information association apparatus, including:
the acquisition module is used for acquiring a video to be processed;
the identification module is used for identifying vehicles and personnel contained in the video to be processed to obtain a vehicle identification frame and a personnel identification frame;
the extraction module is used for respectively extracting the characteristics of the vehicle identification frame and the personnel identification frame to obtain vehicle information and personnel information;
the judging module is used for judging whether the vehicle is associated with the person or not according to the vehicle identification frame and the person identification frame;
and the association module is used for associating the vehicle information with the personnel information when the vehicle and the personnel are associated.
In order to achieve the above object, an embodiment of the present application further provides a human-vehicle information association device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the human-vehicle information association method described above.
In order to achieve the above object, an embodiment of the present application further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the human-vehicle information association method described above.
According to the person-vehicle information association method, apparatus, device and storage medium above, feature extraction is performed on the vehicle identification frames and person identification frames marked in the video to be processed, which effectively avoids interference from background factors and ensures the accuracy of the extracted vehicle information and person information. Whether a vehicle and a person are associated is judged from the calibrated vehicle identification frame and person identification frame, and when an association is determined, the otherwise isolated vehicle information and person information are linked. This realizes the association of person and vehicle information and greatly reduces the search time and the waste of manpower and material resources in subsequent application scenarios such as finding a person from a vehicle or a vehicle from a person.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
Fig. 1 is a flowchart of a human-vehicle information association method according to a first embodiment of the present application;
fig. 2 is a schematic diagram of determining the positional relationship between a vehicle identification frame and a person identification frame in the person-vehicle information association method provided in the second embodiment of the present application;
fig. 3 is a further schematic diagram of determining a position relationship between a vehicle identification frame and a person identification frame in a person-vehicle information association method according to a second embodiment of the present application;
fig. 4 is a further schematic diagram of determining a position relationship between a vehicle identification frame and a person identification frame in a person-vehicle information association method according to a second embodiment of the present application;
fig. 5 is a further schematic diagram of determining a position relationship between a vehicle identification frame and a person identification frame in a person-vehicle information association method according to a second embodiment of the present application;
fig. 6 is a schematic diagram of determining whether a vehicle and a person are associated in the person-vehicle information association method provided in the third embodiment of the present application;
fig. 7 is a schematic diagram of determining whether a vehicle and a person are associated in the person-vehicle information association method provided in the fourth embodiment of the present application;
fig. 8 is a schematic structural diagram of a human-vehicle information correlation device according to a fifth embodiment of the present application;
fig. 9 is a schematic structural diagram of a human-vehicle information correlation device according to a sixth embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments. The embodiments are divided as follows for convenience of description only; this division does not limit the specific implementation of the present application, and the embodiments may be combined with and refer to one another where there is no contradiction.
The first embodiment relates to a person-vehicle information association method. The method performs feature extraction on the vehicle identification frames and person identification frames marked in a video to be processed, which effectively avoids interference from background factors and ensures the accuracy of the extracted vehicle information and person information. Whether a vehicle and a person are associated is judged from the calibrated vehicle identification frame and person identification frame, and when an association is determined, the otherwise isolated vehicle information and person information are linked. This realizes the association of person and vehicle information and greatly reduces the search time and the waste of manpower and material resources in subsequent application scenarios such as finding a person from a vehicle or a vehicle from a person.
The following describes implementation details of the man-vehicle information association method of the present embodiment, and the following description is provided only for easy understanding and is not essential to implementing the present embodiment.
The person-vehicle information association method of this embodiment can be applied to any terminal device capable of executing the method, such as a personal computer, a tablet computer or a smart phone; the examples are not enumerated one by one, and this embodiment is not limited thereto.
The specific flow of this embodiment is shown in fig. 1, and specifically includes the following steps:
step 101, obtaining a video to be processed.
Specifically, the video to be processed may be from monitoring cameras set in different places, or may be from various big data platforms, which is not limited in this embodiment.
In addition, the video to be processed may be in various formats and in various forms, and this embodiment also does not limit this.
And 102, identifying the vehicle and the personnel in the video to be processed to obtain a vehicle identification frame and a personnel identification frame.
It should be understood that, in actual operation, extracting the features of a target object generally proceeds as follows: the target object is determined, the target object is calibrated, and the position of the target object in the video to be processed is determined to obtain a target identification frame for it; the features of the target object within that identification frame are then extracted using a preset recognition technology.
Therefore, to realize person-vehicle information association, a preset recognition technology is needed. In this embodiment, the vehicles and persons contained in the video to be processed are identified by analyzing the video with an artificial-intelligence video recognition technology, and the corresponding vehicle identification frames and person identification frames are then calibrated according to the identified vehicles and persons.
As for the vehicles mentioned above, this embodiment is not limited to motor vehicles; other vehicles with obvious features, such as electric vehicles, three-wheeled vehicles and rickshaws, also belong to the vehicles to be identified from the video to be processed.
Accordingly, the persons include not only drivers and passengers who drive or ride in vehicles, but also pedestrians; that is, all people appearing in the video to be processed belong to the persons to be identified from it.
For convenience of understanding, this embodiment provides an implementation for identifying vehicles and persons from the video to be processed, and then marking the vehicle identification frame corresponding to each vehicle and the person identification frame corresponding to each person, as follows (a combined sketch of steps (1) to (3) is given after the notes below):
(1) Extract video frame images from the video to be processed, taking a frame as the unit: the image corresponding to each frame, or to every few frames, is taken as a video frame image.

Specifically, in practical applications, when the pictures of the video to be processed change little, i.e., tens of frames or even more correspond to the same picture, the frames corresponding to the same picture can be merged into one frame to obtain a single video frame image.

Accordingly, when the pictures of the video to be processed change greatly, i.e., every frame or every few frames corresponds to a different picture, each such frame can be selected to correspond to one video frame image.
(2) Perform vehicle detection on the video frame image and, when vehicles are detected, mark all vehicles appearing in the image to obtain N vehicle identification frames.

(3) Perform face and human-shape detection on the video frame image and, when persons are detected, mark all persons appearing in the image to obtain M person identification frames.
It should be understood that, since a video to be processed can usually extract a plurality of video frame images, in a specific application, it is necessary to perform the above-mentioned vehicle detection and human face and human shape detection on each video frame image.
In order to facilitate implementation, a preset machine learning algorithm, such as a deep convolutional neural network algorithm, may be adopted to train the vehicle sample data and the human face and human shape sample data, respectively, and further obtain a vehicle recognition model for recognizing a vehicle and a human face and human shape recognition model for recognizing a person.
In addition, it should be understood that, since image recognition technology is relatively mature and widespread, this embodiment omits a detailed description of the specific recognition method; a person skilled in the art can select a suitable recognition technology as needed to recognize the video to be processed, identify the vehicles and persons contained in it, and thereby determine the corresponding vehicle identification frames and person identification frames.
In addition, N and M are integers greater than 0, and in practical applications, the values of N and M may be the same or different.
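For illustration only, the following is a minimal sketch of steps (1) to (3) in Python. It assumes OpenCV for video decoding, a simple mean-absolute-difference rule as the frame-merging criterion, and two already-trained detector objects (vehicle_model and person_model) exposing a hypothetical detect() method that returns bounding boxes; none of these names are prescribed by this application.

```python
import cv2
import numpy as np

def extract_frames(video_path, diff_threshold=2.0):
    """Step (1): extract video frame images, merging consecutive frames
    whose pictures are essentially the same into one representative frame."""
    frames, last_gray = [], None
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # keep the frame only if the picture changed noticeably
        if last_gray is None or np.abs(gray - last_gray).mean() > diff_threshold:
            frames.append(frame)
            last_gray = gray
    cap.release()
    return frames

def detect_boxes(image, vehicle_model, person_model):
    """Steps (2)-(3): run vehicle detection and face/human-shape detection
    on one video frame image, returning N vehicle identification frames and
    M person identification frames as (x1, y1, x2, y2) boxes."""
    vehicle_boxes = vehicle_model.detect(image)  # N vehicle identification frames
    person_boxes = person_model.detect(image)    # M person identification frames
    return vehicle_boxes, person_boxes
```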
And 103, respectively extracting the features of the vehicle identification frame and the personnel identification frame to obtain vehicle information and personnel information.
Specifically, in this embodiment, the vehicle information extracted from the vehicle identification frame mainly includes the license plate number, vehicle type, vehicle color, vehicle brand, vehicle accessories (such as the body and interior decoration), the vehicle's traveling direction, the time at which the vehicle appears, and the like.
Correspondingly, the person information extracted from the person identification frame mainly includes facial feature information, clothing color, clothing style, hair color, body attachments (such as a watch, backpack or glasses), the person's traveling direction, the time at which the person appears, and the like.
Furthermore, as described in step 102, several vehicle identification frames and person identification frames may be identified from the video to be processed. Therefore, in order to determine the relevance between each vehicle and each person in the video, the identification frames corresponding to the vehicles and the persons may first be combined; that is, the N vehicle identification frames and the M person identification frames are combined to obtain N × M person-vehicle combinations.
After the N × M person-vehicle combinations are obtained, they need to be traversed; that is, the feature extraction operation of step 103 needs to be performed on each traversed person-vehicle combination.
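As a sketch of this combination step, assuming each identification frame is already represented as a box, the N × M person-vehicle combinations can be enumerated with a Cartesian product:

```python
from itertools import product

def person_vehicle_combinations(vehicle_boxes, person_boxes):
    """Pair each of the N vehicle identification frames with each of the
    M person identification frames, giving N x M person-vehicle combinations."""
    return list(product(vehicle_boxes, person_boxes))

# each (vehicle_box, person_box) combination is then traversed: features are
# extracted (step 103) and the association judgment (step 104) is applied
```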
And 104, judging whether the vehicle is associated with the person or not according to the vehicle identification frame and the person identification frame.
Specifically, if the judgment determines that the vehicle and the person are associated, step 105 is performed; otherwise, the process ends directly, or a prompt is given, for example, that the vehicle and the person in the currently judged person-vehicle combination are not associated.
In this embodiment, when judging, for each person-vehicle combination, whether its vehicle and person are associated according to the vehicle identification frame and the person identification frame, the coordinate information of the vehicle identification frame and of the person identification frame is first obtained, yielding vehicle coordinate information and person coordinate information; whether the vehicle and the person are associated is then judged according to the vehicle coordinate information and the person coordinate information.
The vehicle coordinate information may be the coordinate information of all four vertices of the vehicle identification frame, each vertex coordinate consisting of an X-axis value and a Y-axis value; alternatively, it may be only the top-left and bottom-right vertex coordinates, or only the top-right and bottom-left vertex coordinates.

Correspondingly, the person coordinate information may be the coordinate information of all four vertices of the person identification frame, or only its top-left and bottom-right vertex coordinates, or only its top-right and bottom-left vertex coordinates.

It is worth mentioning that, in order to judge whether the vehicle and the person are associated from the vehicle coordinate information and the person coordinate information, if each contains only two vertex coordinates, the two must correspond to the same pair of vertices: if the vehicle coordinate information consists of the top-left and bottom-right vertex coordinates, the person coordinate information must also consist of the top-left and bottom-right vertex coordinates; conversely, if the vehicle coordinate information consists of the top-right and bottom-left vertex coordinates, the person coordinate information must do likewise.
For convenience of explanation, the following description assumes that the vehicle coordinate information includes at least the top-left vertex coordinate L1 and the bottom-right vertex coordinate R1 of the vehicle identification frame, and that the person coordinate information includes at least the top-left vertex coordinate L2 and the bottom-right vertex coordinate R2 of the person identification frame:

Specifically, first, the vertex coordinate L1 is compared with the vertex coordinate L2, and the vertex coordinate R1 is compared with the vertex coordinate R2; then, the positional relationship between the vehicle identification frame and the person identification frame is determined based on the comparison results; finally, whether the vehicle and the person are associated is judged according to that positional relationship.
That is, whether the vehicle and the person are related or not is determined based on the positional relationship between the vehicle identification frame in which the vehicle is located and the person identification frame in which the person is located.
For example, when the positional relationship between the vehicle identification frame and the person identification frame is an inclusion relationship, it can be considered that the vehicle in the vehicle identification frame and the person in the person identification frame are associated with each other.
For example, when the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship, that is, not inclusive, but partially overlapping, it can be considered that the vehicle in the vehicle identification frame and the person in the person identification frame are associated with each other.
For example, the position relationship between the vehicle identification frame and the person identification frame is a non-inclusive and non-overlapping relationship, that is, the two frames are neither inclusive nor overlapping, and it can be considered that there is no relationship between the vehicle in the vehicle identification frame and the person in the person identification frame.
And 105, associating the vehicle information with the personnel information.
Specifically, when the vehicle in the vehicle identification frame and the person in the person identification frame are associated, the operation of associating the vehicle information with the person information in step 105 associates the license plate number, vehicle type, vehicle color, vehicle brand, vehicle accessories (such as the body and interior decoration), vehicle traveling direction, vehicle appearance time and so on in the vehicle information with the facial feature information, clothing color, clothing style, hair color, body attachments (such as a watch, backpack or glasses), person traveling direction, person appearance time and so on in the person information.
That is, in practical application, as long as any one or more items of the vehicle information are extracted, all of the vehicle information and the person information of the associated persons can be retrieved, greatly shortening search and screening time. In security scenarios, such as finding a person from a vehicle or a vehicle from a person, this greatly reduces the search time and the waste of manpower and material resources.
For example, suppose a person, Zhang San, owns a black Audi Q5 with license plate number "123456", and that the vehicle, driven by Zhang San, appeared at location A at 2 p.m. on September 14, 2020. By inputting the license plate number "123456", or Zhang San's identification number, or his biometric information, the following person-vehicle associated information can be retrieved: Zhang San, with identification number XXXX, and the black Audi Q5 with plate number "123456" appeared at location A at 2 p.m. on September 14, 2020.
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not to be taken as the only limitation to the present embodiment.
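As an illustration of how the linked records could be stored and queried, here is a minimal sketch; the record layout and the dictionary keys (e.g. "plate") are hypothetical and not part of this application.

```python
def associate(vehicle_info: dict, person_info: dict, store: list) -> None:
    """Step 105: store the vehicle information and person information as
    one linked person-vehicle record."""
    store.append({"vehicle": vehicle_info, "person": person_info})

def lookup(store: list, **query):
    """Retrieve every linked record matching all given attributes, so that
    any single vehicle or person attribute recovers the full record."""
    hits = []
    for record in store:
        merged = {**record["vehicle"], **record["person"]}
        if all(merged.get(key) == value for key, value in query.items()):
            hits.append(record)
    return hits

# e.g. lookup(store, plate="123456") would return the Zhang San record above
```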
In practical application, vehicle information and personnel information can be reasonably extracted according to a use scene, and a corresponding system, such as an intelligent security system, is constructed according to needs, so that more convenient service is provided for related workers.
From the above description, it is not difficult to see that the person-vehicle information association method provided by this embodiment performs feature extraction on the vehicle identification frames and person identification frames marked in the video to be processed, effectively avoiding interference from background factors and ensuring the accuracy of the extracted vehicle information and person information.
In addition, whether a vehicle and a person are associated is judged from the calibrated vehicle identification frame and person identification frame, and when an association is determined, the otherwise isolated vehicle information and person information are linked. This realizes the association of person and vehicle information and greatly reduces the search time and the waste of manpower and material resources in subsequent application scenarios such as finding a person from a vehicle or a vehicle from a person.
A second embodiment of the present invention relates to a human-vehicle information associating method. The second embodiment is mainly directed to a specific application scenario of determining the position relationship between the vehicle identification frame and the person identification frame according to the comparison result in the first embodiment, and for convenience of understanding and explanation, the following description is made with reference to fig. 2 to 5.
Specifically, the comparison result used to determine the positional relationship between the vehicle identification frame and the person identification frame is obtained by comparing the vertex coordinate L1 of the vehicle identification frame with the vertex coordinate L2 of the person identification frame, and the vertex coordinate R1 of the vehicle identification frame with the vertex coordinate R2 of the person identification frame. There are therefore nine possible comparison results: ① L1 greater than L2, R1 greater than R2; ② L1 greater than L2, R1 equal to R2; ③ L1 greater than L2, R1 less than R2; ④ L1 equal to L2, R1 greater than R2; ⑤ L1 equal to L2, R1 equal to R2; ⑥ L1 equal to L2, R1 less than R2; ⑦ L1 less than L2, R1 greater than R2; ⑧ L1 less than L2, R1 equal to R2; ⑨ L1 less than L2, R1 less than R2.
Among these nine comparison results, only when the vertex coordinate L1 is not greater than the vertex coordinate L2 and the vertex coordinate R1 is not less than the vertex coordinate R2, that is, in cases ④, ⑤, ⑦ and ⑧, is the positional relationship between the vehicle identification frame and the person identification frame an inclusion relationship in which the vehicle identification frame contains the person identification frame.

For ease of understanding, this embodiment takes one of these cases as an example; the corresponding positional relationship, with the vehicle identification frame containing the person identification frame, is shown in fig. 2.

Correspondingly, in cases ②, ③ and ⑥, that is, when the vertex coordinate L1 is not less than the vertex coordinate L2 and the vertex coordinate R1 is not greater than the vertex coordinate R2, the positional relationship is an inclusion relationship in which the person identification frame contains the vehicle identification frame.

For ease of understanding, this embodiment likewise takes one of these cases as an example; the corresponding positional relationship, with the person identification frame containing the vehicle identification frame, is shown in fig. 3.

It should be understood that case ⑤, in which the vehicle identification frame and the person identification frame completely overlap, is in essence also an inclusion relationship.

Accordingly, in cases ① and ⑨, it can be determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusive relationship.
For the operation of determining the positional relationship of the vehicle identification frame and the person identification frame according to the nine cases given above, the following pseudocode may be used in practical applications:

"if L1 <= L2 and R1 >= R2 then
    the vehicle identification frame contains the person identification frame
else if L1 >= L2 and R1 <= R2 then
    the person identification frame contains the vehicle identification frame
else
    the vehicle identification frame and the person identification frame are in a non-inclusive relationship"
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not to be taken as the only limitation to the present embodiment.
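The containment pseudocode above can be rendered as runnable Python, under the assumption (made for this sketch, not stated in the pseudocode itself) that "not greater" between two vertices is evaluated component-wise in image coordinates with the y-axis pointing down:

```python
def frame_relation(L1, R1, L2, R2):
    """Classify a vehicle identification frame (top-left L1, bottom-right R1)
    against a person identification frame (top-left L2, bottom-right R2)."""
    def up_left_of(a, b):
        # vertex a lies up-and-left of (or coincides with) vertex b
        return a[0] <= b[0] and a[1] <= b[1]

    if up_left_of(L1, L2) and up_left_of(R2, R1):
        return "vehicle frame contains person frame"   # cases 4, 5, 7, 8
    if up_left_of(L2, L1) and up_left_of(R1, R2):
        return "person frame contains vehicle frame"   # cases 2, 3, 6
    return "non-inclusive"                             # e.g. cases 1 and 9
```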
Furthermore, it is worth mentioning that the vehicle identification frame is usually larger than the person identification frame, i.e., the person is located in the vehicle, possibly as the driver or as a passenger, which is the situation given in fig. 2. However, in the actual processing of the video to be processed, a person may be located outside the vehicle but very close to it, for example pressed tightly against it; the calibrated person identification frame then often encloses the vehicle, so that the person identification frame is larger than the vehicle identification frame, which is the situation shown in fig. 3.
Further, after determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusive relationship, that is, when the comparison result is case ① or case ⑨ given above, whether there is an association between the vehicle and the person appearing in the same image frame can be further determined by the following two judgment logics.
Mode 1: judge whether the vertex coordinate L1 is not greater than the vertex coordinate L2, whether the vertex coordinate R1 is not greater than the vertex coordinate R2, and whether the vertex coordinate L2 is less than the vertex coordinate R1.

Accordingly, if the vertex coordinate L1 is not greater than the vertex coordinate L2, the vertex coordinate R1 is not greater than the vertex coordinate R2, and the vertex coordinate L2 is less than the vertex coordinate R1, the positional relationship between the vehicle identification frame and the person identification frame is determined to be an overlapping relationship, namely the situation shown in fig. 4; otherwise, the vehicle identification frame and the person identification frame are completely separated with no overlapping area, and their positional relationship is determined to be a non-inclusive, non-overlapping relationship.
Mode 2: judge whether the vertex coordinate L2 is not greater than the vertex coordinate L1, whether the vertex coordinate R2 is not greater than the vertex coordinate R1, and whether the vertex coordinate L1 is less than the vertex coordinate R2.

Accordingly, if the vertex coordinate L2 is not greater than the vertex coordinate L1, the vertex coordinate R2 is not greater than the vertex coordinate R1, and the vertex coordinate L1 is less than the vertex coordinate R2, the positional relationship between the vehicle identification frame and the person identification frame is determined to be an overlapping relationship, namely the situation shown in fig. 5; otherwise, the vehicle identification frame and the person identification frame are completely separated with no overlapping area, and their positional relationship is determined to be a non-inclusive, non-overlapping relationship.
In addition, it is worth mentioning that, in order to ensure the accuracy of the determination result as much as possible, before determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship, it may be determined whether an overlapping area satisfies a preset overlapping condition.
Correspondingly, if the condition is satisfied, the positional relationship between the vehicle identification frame and the person identification frame is determined to be an overlapping relationship; otherwise, it is determined to be a non-inclusive, non-overlapping relationship.
Regarding the above-mentioned operation of determining whether the overlap area satisfies the preset overlap condition, the logics for the mode 1 and the mode 2 are substantially the same, except that the vertex coordinate values according to which the overlap area and the reference area are determined are different.
Specifically, mode 1 is implemented as follows:

First, the overlapping area is determined from the vertex coordinate L2 and the vertex coordinate R1, as in the shaded portion shown in fig. 4, and the reference area is determined from the vertex coordinate L1 and the vertex coordinate R2, as in the dashed-box portion shown in fig. 4.
Then, whether the ratio of the overlapping area to the reference area is larger than a first preset threshold value is judged.
Finally, if the ratio is greater than the first preset threshold, the positional relationship between the vehicle identification frame and the person identification frame is determined to be an overlapping relationship; otherwise, it is determined to be a non-inclusive, non-overlapping relationship.
Mode 2 is implemented as follows:

First, the overlapping area is determined from the vertex coordinate L1 and the vertex coordinate R2, as in the shaded portion shown in fig. 5, and the reference area is determined from the vertex coordinate L2 and the vertex coordinate R1, as in the dashed-box portion shown in fig. 5.
Then, whether the ratio of the overlapping area to the reference area is larger than a first preset threshold value is judged.
Finally, if the ratio is greater than the first preset threshold, the positional relationship between the vehicle identification frame and the person identification frame is determined to be an overlapping relationship; otherwise, it is determined to be a non-inclusive, non-overlapping relationship.
The first preset threshold mentioned above can be set by those skilled in the art according to actual needs, for example, it is set to 20%.
For the operation of further judging, from the overlapping area, whether the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship, the following pseudocode may be used in practical applications, where S(A, B) denotes the area of the rectangle determined by the two vertex coordinates A and B:

"if the vehicle identification frame and the person identification frame are in an overlapping relationship then
    if S(L2, R1) > S(L1, R2) * 20% or S(R2, L1) > S(R1, L2) * 20% then
        the vehicle identification frame and the person identification frame satisfy the overlapping relationship
    else
        the vehicle identification frame and the person identification frame do not satisfy the overlapping relationship"
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not to be taken as the only limitation to the present embodiment.
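A sketch of the overlap test in Python follows; here S is implemented as the area of the axis-aligned rectangle spanned by a top-left and a bottom-right vertex (zero when the two vertices do not bound a valid rectangle), and the 20% first preset threshold is an adjustable parameter:

```python
def S(top_left, bottom_right):
    """Area of the axis-aligned rectangle spanned by two vertices,
    or 0 if the vertices do not bound a valid rectangle."""
    w = bottom_right[0] - top_left[0]
    h = bottom_right[1] - top_left[1]
    return w * h if w > 0 and h > 0 else 0.0

def overlap_satisfied(L1, R1, L2, R2, ratio=0.20):
    """Mirror of the pseudocode: mode 1 compares the overlap S(L2, R1)
    against the reference S(L1, R2); mode 2 compares S(L1, R2) against
    S(L2, R1)."""
    return (S(L2, R1) > S(L1, R2) * ratio) or (S(L1, R2) > S(L2, R1) * ratio)
```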
In addition, in practical application, in order to ensure the accuracy of the judgment result as much as possible, for each judgment, the auxiliary judgment can be performed by means of the video frame images corresponding to the adjacent frames, so as to eliminate interference and further ensure the accuracy of the finally established human-vehicle correlation result.
In practical applications, when the calculation process is performed based on the coordinate values of the vertices, the calculation is specifically performed based on the coordinate values of the vertices on the X axis and the Y axis.
Determining the positional relationship between the vehicle identification frame and the person identification frame in the above manner makes the person-vehicle information association method of this embodiment applicable both to scenes in which person and vehicle share one frame (the person is in the vehicle) and to scenes in which person and vehicle are in different identification frames but appear in the same picture. It can thus better analyze the association between otherwise isolated, scattered pieces of target information (vehicle information and person information). For example, even if a user is not the owner of a vehicle but often appears at the same place and time as the vehicle, the vehicle can later be tracked simply by obtaining the user's travel track. This provides related workers with more, deeper and more valuable clue information, and increases the practical value of the vehicle information and person information.
A third embodiment of the present invention relates to a person-vehicle information association method. The third embodiment is mainly directed to the scenario, described in the second embodiment, in which the vehicle identification frame and the person identification frame are determined to be in a non-inclusive, non-overlapping relationship, i.e., a certain distance exists between them, before the vehicle and the person are judged not to be associated. In other words, the third embodiment further screens out associated vehicles and persons from those otherwise judged non-associated. For convenience of understanding and explanation, the following description is made with reference to fig. 6.
As shown in fig. 6, before determining that the vehicle and the person are not associated, the center-point coordinate C1 of the vehicle identification frame may be determined from the vertex coordinate L1 and the vertex coordinate R1, and the center-point coordinate C2 of the person identification frame from the vertex coordinate L2 and the vertex coordinate R2. Then, the distance D1 between the two center points is determined from the center-point coordinate C1 and the center-point coordinate C2, and the diagonal distance D2 of the video frame image is determined. Next, it is judged whether the ratio of the distance D1 to the diagonal distance D2 is greater than a second preset threshold; if it is, the vehicle and the person are determined to be associated; otherwise, they are determined not to be associated.

The diagonal distance D2 may in practice be determined from the top-left and bottom-right vertex coordinates of the video frame image, or from its top-right and bottom-left vertex coordinates, which this embodiment does not limit.

In addition, in practical applications, the distance D1 may be determined not only from the center-point coordinate C1 of the vehicle identification frame and the center-point coordinate C2 of the person identification frame, but also from the coordinates of other points on or within the vehicle identification frame and the person identification frame, which this embodiment does not limit either.
In addition, the second preset threshold mentioned above can be set by those skilled in the art according to actual needs, for example, it is set to 30%.
For the operation of further judging, from the distance D1 and the diagonal distance D2, whether the vehicle in the vehicle identification frame and the person in the person identification frame are associated, the following pseudocode may be used in practical applications:

"if D1 > D2 * 30% then
    there may be an association between the vehicle and the person
else
    there may be no association between the vehicle and the person"
It should be understood that the above examples are only examples for better understanding of the technical solution of the present embodiment, and are not to be taken as the only limitation to the present embodiment.
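A sketch of the distance judgment in Python, following the pseudocode's comparison direction as given; math.dist requires Python 3.8 or later:

```python
import math

def center(top_left, bottom_right):
    """Center point of an identification frame from its two vertices."""
    return ((top_left[0] + bottom_right[0]) / 2,
            (top_left[1] + bottom_right[1]) / 2)

def distance_judgment(L1, R1, L2, R2, image_top_left, image_bottom_right,
                      threshold=0.30):
    """Compare D1 (distance between the two frame centers) with D2 (the
    diagonal distance of the video frame image) scaled by the threshold."""
    D1 = math.dist(center(L1, R1), center(L2, R2))
    D2 = math.dist(image_top_left, image_bottom_right)
    return D1 > D2 * threshold  # True: there may be an association
```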
Similarly, in practical application, in order to ensure the accuracy of the judgment result as much as possible, for each judgment, the auxiliary judgment can be performed by using the video frame images corresponding to the adjacent frames to eliminate interference, so as to ensure the accuracy of the finally established human-vehicle correlation result.
Thus, before the vehicle identification frame and the person identification frame are determined to be in a non-inclusive, non-overlapping relationship (i.e., a certain distance exists between them) and before the vehicle and the person are determined not to be associated, the distance is used to judge whether the vehicle in the vehicle identification frame and the person in the person identification frame are associated. The person-vehicle information association method of this embodiment is therefore better suited to scenes in which person and vehicle are in different identification frames but appear in the same picture, and can better analyze the association between isolated, scattered target information (vehicle information and person information).
A fourth embodiment of the present invention relates to a person-vehicle information association method. The fourth embodiment is mainly directed to the scenario in the third embodiment in which, from the distance D1 and the diagonal distance D2, it is determined that there is no association between the vehicle and the person, i.e., the person and the vehicle are in the same picture but do not satisfy the distance judgment. That is, the fourth embodiment further screens out associated vehicles and persons from those judged non-associated. For ease of understanding and explanation, the following description is made with reference to fig. 7.
Specifically, before determining from the distance D1 and the diagonal distance D2 that there is no association between the vehicle and the person, the following operations may be performed:
(1) Expand the vehicle identification frame and the person identification frame to obtain a background identification frame (a sketch of this expansion is given after this list).

Specifically, the expansion operation enlarges the vehicle identification frame and the person identification frame to a preset size.

In practical applications, a single background identification frame may be expanded to contain both the vehicle identification frame and the person identification frame; alternatively, the two frames may be expanded separately to obtain a background identification frame corresponding to the vehicle identification frame and another corresponding to the person identification frame.

For convenience of processing, this embodiment adopts the first approach, i.e., the background identification frame contains both the vehicle identification frame and the person identification frame.
(2) Perform feature extraction on the background identification frame and determine the location information of the vehicle and the person from the extracted features.
For this feature extraction, a person skilled in the art can select a suitable machine learning algorithm in advance to construct a corresponding feature extraction model as needed, and then use the trained model to extract from the background identification frame the feature information that meets the requirements.
(3) Determine the time information at which the vehicle and the person appear in the video to be processed.

(4) Acquire the associated videos associated with the video to be processed.

Specifically, an associated video is a video shot by a camera located in the same area as the camera shooting the video to be processed, but at a different point location.

(5) Extract associated video frame images from the associated video provided by each point location, according to the location information and the time information.
The operations of extracting associated video frame images from the associated videos provided at the respective point locations in step (5), and of identifying the above-mentioned vehicle and person in those images, are similar to those given in step 102 of the first embodiment and are not repeated here.
(6) Traverse each associated video frame image and record the point locations at which the vehicle appears and the point locations at which the person appears.

Correspondingly, if the intersection of the point locations at which the vehicle appears and those at which the person appears is larger than a third preset threshold, the vehicle and the person are determined to be associated; otherwise, they are determined not to be associated.

The third preset threshold may be set by a person skilled in the art according to actual needs; for example, it may be set to more than half of the number of point locations.
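As a sketch of step (1) above, assuming the first approach (one background identification frame enclosing both frames) with a hypothetical proportional margin and image size used only for clamping:

```python
def background_frame(vehicle_box, person_box, margin=0.1,
                     image_w=1920, image_h=1080):
    """Expand one background identification frame that encloses both the
    vehicle identification frame and the person identification frame."""
    x1 = min(vehicle_box[0], person_box[0])
    y1 = min(vehicle_box[1], person_box[1])
    x2 = max(vehicle_box[2], person_box[2])
    y2 = max(vehicle_box[3], person_box[3])
    dx, dy = (x2 - x1) * margin, (y2 - y1) * margin
    # clamp the expanded frame to the image bounds
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(image_w, x2 + dx), min(image_h, y2 + dy))
```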
For ease of understanding, the following description is made in conjunction with FIG. 7:
Assume that the relevance between vehicle A and person B needs to be determined, and that after the judgments given in the first, second and third embodiments their relevance still cannot be determined. When the manner given in this embodiment is adopted for further judgment, use is made of associated video 1 shot by the camera at point location 1, associated video 2 shot by the camera at point location 2, associated video 3 shot by the camera at point location 3, and associated video 4 shot by the camera at point location 4.
Corresponding video frame images are extracted from associated videos 1 to 4 according to the location information determined in step (2) and the time information determined in step (3), and the point locations at which the vehicle appears and at which the person appears are then recorded.
As shown in fig. 7, vehicle A and person B appear in the video frame image taken at point location 1, vehicle A and person B appear in the image taken at point location 2, only person B appears in the image taken at point location 3, and vehicle A and person B appear in the image taken at point location 4.
Based on the stipulation above that the intersection of the point locations where vehicle A appears and those where person B appears must be larger than the third preset threshold, here more than half of the number of point locations, it follows from the records that with the 4 auxiliary point locations in fig. 7, vehicle A and person B can be considered associated as long as the intersection is larger than 2, i.e., equals 3 or 4. In the example of fig. 7 the intersection is 3 (point locations 1, 2 and 4), so vehicle A and person B are associated.
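The point-location intersection rule can be sketched as follows; the sets of point locations are assumed to come from traversing the associated video frame images in step (6):

```python
def points_associated(vehicle_points: set, person_points: set,
                      total_points: int) -> bool:
    """Associate vehicle and person if they co-occur at more than half of
    the auxiliary camera point locations (the third preset threshold)."""
    return len(vehicle_points & person_points) > total_points / 2

# fig. 7 example: vehicle A appears at {1, 2, 4}, person B at {1, 2, 3, 4}
# points_associated({1, 2, 4}, {1, 2, 3, 4}, 4) -> True, since 3 > 2
```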
Furthermore, it is worth mentioning that a single co-occurrence has a high probability of misjudgment; to ensure the accuracy of the result determined in the above manner, the vehicle and the person should co-occur at the point locations multiple times and frequently.
Therefore, when the vehicle and the person are in the same picture but the scene does not satisfy the distance judgment, videos shot by cameras at other point locations are used to assist in judging whether they are associated. The human-vehicle information association method provided by this embodiment is thus better suited to such scenes: it screens out as many associated vehicles and persons as possible, and better reveals the association between otherwise isolated and scattered target information (vehicle information and person information).
In addition, it should be understood that the steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a single step may be split into several steps, and all such variants fall within the protection scope of this patent as long as they contain the same logical relationship. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes without altering the core design of the algorithm or process, likewise falls within the protection scope of this patent.
A fifth embodiment of the present invention relates to a human-vehicle information association apparatus, as shown in FIG. 8, including: an acquisition module 801, an identification module 802, an extraction module 803, a judging module 804, and an association module 805.
The acquisition module 801 is configured to acquire a video to be processed; the identification module 802 is configured to identify the vehicles and persons included in the video to be processed to obtain a vehicle identification frame and a person identification frame; the extraction module 803 is configured to perform feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information; the judging module 804 is configured to judge whether the vehicle and the person are associated according to the vehicle identification frame and the person identification frame; and the association module 805 is configured to associate the vehicle information with the person information when the vehicle and the person are associated.
In addition, in another example, when the identification module 802 identifies the vehicles and persons included in the video to be processed to obtain the vehicle identification frame and the person identification frame, it specifically performs:
extracting a video frame image from the video to be processed;
carrying out vehicle detection on the video frame image, and when a vehicle is detected, marking all vehicles appearing in the video frame image to obtain N vehicle identification frames, wherein N is an integer greater than 0;
and performing human shape detection on the video frame image, and, when persons are detected, marking all persons appearing in the video frame image to obtain M person identification frames, wherein M is an integer greater than 0.
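As a hedged illustration of this identification step, the sketch below runs a vehicle detector and a person detector over one extracted frame. The `Box` corner-pair type and the detector callables are assumptions, since the embodiment does not prescribe a particular detection model:

```python
from typing import Callable, List, NamedTuple

class Box(NamedTuple):
    """Axis-aligned identification frame given by its top-left (L) and
    bottom-right (R) corners in image coordinates (y grows downward)."""
    lx: float
    ly: float
    rx: float
    ry: float

def mark_detections(frame_image,
                    vehicle_detector: Callable[..., List[Box]],
                    person_detector: Callable[..., List[Box]]):
    # N vehicle identification frames and M person identification frames;
    # both N and M are expected to be greater than 0 when targets are present.
    vehicle_boxes = vehicle_detector(frame_image)
    person_boxes = person_detector(frame_image)
    return vehicle_boxes, person_boxes
```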
In addition, in another example, the human-vehicle information association apparatus may further include a combination module.
Specifically, the combination module is configured to combine the N vehicle identification frames and the M person identification frames to obtain N × M person-vehicle combinations.
Accordingly, the operations performed by the judging module 804 are applied to each person-vehicle combination. That is, the judging module 804 traverses the N × M person-vehicle combinations and performs the above judging operation for each traversed combination.
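A sketch of this combination module under the same assumptions (the `judge` callable stands in for the judging module's logic described below):

```python
from itertools import product

def judge_all_combinations(vehicle_boxes, person_boxes, judge):
    """Traverse the N x M person-vehicle combinations and collect the
    associated pairs; `judge` is any predicate over (vehicle_box, person_box)."""
    return [(v, p) for v, p in product(vehicle_boxes, person_boxes) if judge(v, p)]
```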
In another example, when the judging module 804 judges whether the vehicle and the person are associated according to the vehicle identification frame and the person identification frame, it specifically performs:
acquiring coordinate information of the vehicle identification frame and coordinate information of the personnel identification frame to obtain vehicle coordinate information and personnel coordinate information;
and judging whether the vehicle is associated with the person or not according to the vehicle coordinate information and the person coordinate information.
In addition, in another example, the vehicle coordinate information includes at least a vertex coordinate value L1 of the upper left corner and a vertex coordinate value R1 of the lower right corner of the vehicle identification frame, and the personnel coordinate information includes at least a vertex coordinate value L2 of the upper left corner and a vertex coordinate value R2 of the lower right corner of the personnel identification frame.
Correspondingly, when the judging module 804 judges whether the vehicle and the person are associated according to the vehicle coordinate information and the personnel coordinate information, it specifically performs:
comparing the vertex coordinate value L1 with the vertex coordinate value L2, and comparing the vertex coordinate value R1 with the vertex coordinate value R2;
determining the positional relationship between the vehicle identification frame and the personnel identification frame according to the comparison results;
and judging whether the vehicle and the person are associated according to the positional relationship.
In another example, when the judging module 804 determines the positional relationship between the vehicle identification frame and the personnel identification frame according to the comparison results, it specifically performs:
if the vertex coordinate value L1 is not greater than the vertex coordinate value L2 and the vertex coordinate value R1 is not less than the vertex coordinate value R2, or if the vertex coordinate value L1 is not less than the vertex coordinate value L2 and the vertex coordinate value R1 is not greater than the vertex coordinate value R2, determining that the positional relationship between the vehicle identification frame and the personnel identification frame is an inclusion relationship;
otherwise, determining that the positional relationship between the vehicle identification frame and the personnel identification frame is a non-inclusive relationship.
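Read coordinate-wise (an assumption; the text compares the corner points L and R without spelling out per-axis handling), the inclusion test can be sketched as follows, reusing the `Box` type from the earlier sketch:

```python
def is_inclusion(v: Box, p: Box) -> bool:
    # Vehicle frame contains person frame: L1 <= L2 and R1 >= R2, per axis.
    vehicle_contains = v.lx <= p.lx and v.ly <= p.ly and v.rx >= p.rx and v.ry >= p.ry
    # Person frame contains vehicle frame: L1 >= L2 and R1 <= R2, per axis.
    person_contains = v.lx >= p.lx and v.ly >= p.ly and v.rx <= p.rx and v.ry <= p.ry
    return vehicle_contains or person_contains
```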
In addition, in another example, after determining that the positional relationship between the vehicle identification frame and the personnel identification frame is a non-inclusive relationship, the judging module 804 is further configured to:
judge whether the vertex coordinate value L1 is not greater than the vertex coordinate value L2, whether the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and whether the vertex coordinate value L2 is less than the vertex coordinate value R1;
if the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1, determine that the positional relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship;
otherwise, determine that the positional relationship between the vehicle identification frame and the personnel identification frame is a non-inclusive and non-overlapping relationship.
In addition, in another example, before determining that the positional relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship, the judging module 804 is further configured to:
determine an overlapping area according to the vertex coordinate value L2 and the vertex coordinate value R1;
determine a reference area according to the vertex coordinate value L1 and the vertex coordinate value R2;
judge whether the ratio of the overlapping area to the reference area is greater than a first preset threshold;
correspondingly, if the ratio is greater than the first preset threshold, the step of determining that the positional relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship is performed; otherwise, the step of determining that the positional relationship is a non-inclusive and non-overlapping relationship is performed.
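For the case handled here, where the vehicle frame lies up-left of the person frame, the overlapping rectangle spans L2 to R1 and the reference rectangle spans L1 to R2. A hedged sketch in image coordinates, with `Box` as above and the first preset threshold left as a free parameter:

```python
def overlap_exceeds_threshold(v: Box, p: Box, first_threshold: float) -> bool:
    # Overlapping area from vertex L2 (person top-left) and vertex R1 (vehicle bottom-right).
    overlap = max(0.0, v.rx - p.lx) * max(0.0, v.ry - p.ly)
    # Reference area from vertex L1 (vehicle top-left) and vertex R2 (person bottom-right).
    reference = max(0.0, p.rx - v.lx) * max(0.0, p.ry - v.ly)
    return reference > 0 and overlap / reference > first_threshold
```

The mirrored branch described next (person frame up-left of the vehicle frame) follows by swapping the two arguments.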
In addition, in another example, after determining that the positional relationship between the vehicle identification frame and the personnel identification frame is a non-inclusive relationship, the judging module 804 is further configured to:
judge whether the vertex coordinate value L2 is not greater than the vertex coordinate value L1, whether the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and whether the vertex coordinate value L1 is less than the vertex coordinate value R2;
if the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2, determine that the positional relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship;
otherwise, determine that the positional relationship between the vehicle identification frame and the personnel identification frame is a non-inclusive and non-overlapping relationship.
In addition, in another example, before determining that the positional relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship, the judging module 804 is further configured to:
determine an overlapping area according to the vertex coordinate value R2 and the vertex coordinate value L1;
determine a reference area according to the vertex coordinate value L2 and the vertex coordinate value R1;
judge whether the ratio of the overlapping area to the reference area is greater than a first preset threshold;
correspondingly, if the ratio is greater than the first preset threshold, the step of determining that the positional relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship is performed; otherwise, the step of determining that the positional relationship is a non-inclusive and non-overlapping relationship is performed.
In another example, when the judging module 804 judges whether the vehicle and the person are associated according to the positional relationship, it specifically performs:
determining that there is an association between the vehicle and the person when the positional relationship is an inclusion relationship;
determining that there is an association between the vehicle and the person when the positional relationship is an overlapping relationship;
determining that there is no association between the vehicle and the person when the positional relationship is a non-inclusive and non-overlapping relationship.
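Putting the three positional cases together, the judging step before any fallback can be sketched as follows, reusing `is_inclusion` and `overlap_exceeds_threshold` from the earlier sketches; the argument swap covers the mirrored overlap branch:

```python
def judge_by_position(v: Box, p: Box, first_threshold: float) -> bool:
    if is_inclusion(v, p):  # inclusion relationship -> associated
        return True
    if (overlap_exceeds_threshold(v, p, first_threshold)
            or overlap_exceeds_threshold(p, v, first_threshold)):
        return True         # overlapping relationship -> associated
    return False            # non-inclusive, non-overlapping -> not associated (before fallbacks)
```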
Further, in another example, before determining that the vehicle and the person are not associated, the judging module 804 is further configured to:
determine a center point coordinate value C1 of the vehicle identification frame according to the vertex coordinate value L1 and the vertex coordinate value R1;
determine a center point coordinate value C2 of the personnel identification frame according to the vertex coordinate value L2 and the vertex coordinate value R2;
determine a distance D1 from the center point of the vehicle identification frame to the center point of the personnel identification frame according to the center point coordinate value C1 and the center point coordinate value C2;
determine a diagonal distance D2 of the video frame image;
judge whether the ratio of the distance D1 to the diagonal distance D2 is greater than a second preset threshold;
correspondingly, if the ratio is greater than the second preset threshold, determine that the vehicle and the person are associated; otherwise, perform the step of determining that there is no association between the vehicle and the person.
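A sketch of this distance fallback (frame width and height are assumed inputs). It follows the condition exactly as stated in the text, where a D1-to-D2 ratio greater than the second preset threshold yields an association:

```python
import math

def near_by_center_distance(v: Box, p: Box, frame_w: float, frame_h: float,
                            second_threshold: float) -> bool:
    c1 = ((v.lx + v.rx) / 2.0, (v.ly + v.ry) / 2.0)  # center point C1 of the vehicle frame
    c2 = ((p.lx + p.rx) / 2.0, (p.ly + p.ry) / 2.0)  # center point C2 of the person frame
    d1 = math.dist(c1, c2)                           # center-to-center distance D1
    d2 = math.hypot(frame_w, frame_h)                # diagonal distance D2 of the frame image
    return d1 / d2 > second_threshold                # condition as stated in the embodiment
```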
Further, in another example, before determining that the vehicle and the person are not associated, the judging module 804 is further configured to:
expanding the vehicle identification frame and the personnel identification frame to obtain a background identification frame;
extracting features of the background recognition frame, and determining the location information of the vehicle and the personnel according to the extracted features;
determining time information of the vehicle and the person appearing in the video to be processed;
acquiring an associated video of the video to be processed, wherein the associated video is video shot by a camera that is located in the same area as the camera shooting the video to be processed but at a different point location;
extracting associated video frame images from the associated videos provided by each point location according to the point location information and the time information;
traversing each associated video frame image, and recording the point locations where the vehicle appears and the point locations where the person appears;
if the intersection of the point locations where the vehicle appears and the point locations where the person appears is greater than a third preset threshold, determining that the vehicle and the person are associated;
otherwise, the step of determining that there is no association between the vehicle and the person is performed.
It should be understood that this embodiment is an apparatus embodiment corresponding to the first, second, third, or fourth embodiment and can be implemented in cooperation with any of them. The related technical details mentioned in those embodiments remain valid in this embodiment and are not repeated here in order to reduce repetition; correspondingly, the related technical details mentioned in this embodiment also apply to the first, second, third, or fourth embodiment.
It should be noted that all the modules involved in this embodiment are logical modules. In practical applications, a logical unit may be one physical unit, part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
A sixth embodiment of the present application relates to a human-vehicle information association device, as shown in FIG. 9, including: at least one processor 901; and a memory 902 communicatively connected to the at least one processor 901. The memory 902 stores instructions executable by the at least one processor 901, and the instructions are executed by the at least one processor 901 to enable the at least one processor 901 to perform the human-vehicle information association method described in the above method embodiments.
The memory 902 and the processor 901 are coupled by a bus, which may comprise any number of interconnected buses and bridges linking the various circuits of the processor 901 and the memory 902. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 901 is transmitted over a wireless medium through an antenna, and the antenna also receives data and forwards it to the processor 901.
The processor 901 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions, while the memory 902 may be used to store data used by the processor 901 in performing operations.
A seventh embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the human-vehicle information association method described in the above method embodiments.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the above embodiments are specific examples of carrying out the present application, and that in practice various changes may be made in form and detail without departing from the spirit and scope of the present application.

Claims (16)

1. A method for associating human and vehicle information, characterized by comprising the following steps:
acquiring a video to be processed;
identifying vehicles and personnel included in the video to be processed to obtain a vehicle identification frame and a personnel identification frame;
respectively extracting the characteristics of the vehicle identification frame and the personnel identification frame to obtain vehicle information and personnel information;
judging whether the vehicle is associated with the person or not according to the vehicle identification frame and the person identification frame;
and if the association exists, associating the vehicle information with the personnel information.
2. The people-vehicle information correlation method according to claim 1, wherein identifying the vehicle and the person included in the video to be processed to obtain a vehicle identification frame and a person identification frame comprises:
extracting a video frame image from the video to be processed;
carrying out vehicle detection on the video frame image, and when a vehicle is detected, marking all vehicles appearing in the video frame image to obtain N vehicle identification frames, wherein N is an integer greater than 0;
and performing human shape detection on the video frame image, and, when persons are detected, marking all persons appearing in the video frame image to obtain M person identification frames, wherein M is an integer greater than 0.
3. The people-vehicle information association method according to claim 2, wherein before the determining whether the association between the vehicle and the person exists according to the vehicle identification box and the person identification box, the method further comprises:
combining the N vehicle identification frames and the M personnel identification frames to obtain N × M person-vehicle combinations;
and traversing the N × M person-vehicle combinations, and performing, for each traversed person-vehicle combination, the step of judging whether the vehicle and the person are associated according to the vehicle identification frame and the personnel identification frame.
4. The people-vehicle information correlation method according to claim 3, wherein the determining whether the vehicle and the person are correlated according to the vehicle identification frame and the person identification frame comprises:
acquiring coordinate information of the vehicle identification frame and coordinate information of the personnel identification frame to obtain vehicle coordinate information and personnel coordinate information;
and judging whether the vehicle is associated with the person or not according to the vehicle coordinate information and the person coordinate information.
5. The method according to claim 4, wherein the vehicle coordinate information at least includes a vertex coordinate value L1 of an upper left corner and a vertex coordinate value R1 of a lower right corner of the vehicle identification frame, and the personnel coordinate information at least includes a vertex coordinate value L2 of an upper left corner and a vertex coordinate value R2 of a lower right corner of the personnel identification frame;
the judging whether the vehicle and the person are associated according to the vehicle coordinate information and the personnel coordinate information comprises:
comparing the vertex coordinate value L1 with the vertex coordinate value L2, and comparing the vertex coordinate value R1 with the vertex coordinate value R2;
determining the position relation between the vehicle identification frame and the personnel identification frame according to the comparison result;
and judging whether the vehicle is associated with the person or not according to the position relation.
6. The people-vehicle information correlation method according to claim 5, wherein the determining the position relationship between the vehicle identification frame and the person identification frame according to the comparison result comprises:
if the vertex coordinate value L1 is not greater than the vertex coordinate value L2 and the vertex coordinate value R1 is not less than the vertex coordinate value R2, or if the vertex coordinate value L1 is not less than the vertex coordinate value L2 and the vertex coordinate value R1 is not greater than the vertex coordinate value R2, determining that the position relationship between the vehicle identification frame and the personnel identification frame is an inclusion relationship;
otherwise, determining that the position relation of the vehicle identification frame and the personnel identification frame is a non-inclusive relation.
7. The human-vehicle information correlation method according to claim 6, wherein after the determination that the positional relationship of the vehicle identification frame and the human identification frame is a non-inclusive relationship, the method further comprises:
judging whether the vertex coordinate value L1 is not greater than the vertex coordinate value L2, whether the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and whether the vertex coordinate value L2 is less than the vertex coordinate value R1;
if the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1, determining that the position relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship;
otherwise, determining that the position relation of the vehicle identification frame and the personnel identification frame is a non-inclusive and non-overlapping relation.
8. The human-vehicle information associating method according to claim 7, wherein before the determining that the positional relationship of the vehicle identifying frame and the human identifying frame is an overlapping relationship, the method further comprises:
determining an overlapping area according to the vertex coordinate value L2 and the vertex coordinate value R1;
determining a reference area according to the vertex coordinate value L1 and the vertex coordinate value R2;
judging whether the ratio of the overlapping area to the reference area is larger than a first preset threshold value or not;
if so, executing the step of determining that the position relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship;
otherwise, the step of determining that the position relationship of the vehicle identification frame and the person identification frame is a non-inclusive and non-overlapping relationship is performed.
9. The human-vehicle information correlation method according to claim 6, wherein after the determination that the positional relationship of the vehicle identification frame and the human identification frame is a non-inclusive relationship, the method further comprises:
judging whether the vertex coordinate value L2 is not greater than the vertex coordinate value L1, whether the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and whether the vertex coordinate value L1 is less than the vertex coordinate value R2;
if the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2, determining that the position relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship;
otherwise, determining that the position relation of the vehicle identification frame and the personnel identification frame is a non-inclusive and non-overlapping relation.
10. The human-vehicle information associating method according to claim 9, wherein before the determining that the positional relationship of the vehicle identifying frame and the human identifying frame is an overlapping relationship, the method further comprises:
determining an overlapping area according to the vertex coordinate value R2 and the vertex coordinate value L1;
determining a reference area according to the vertex coordinate value L2 and the vertex coordinate value R1;
judging whether the ratio of the overlapping area to the reference area is larger than a first preset threshold value or not;
if so, executing the step of determining that the position relationship between the vehicle identification frame and the personnel identification frame is an overlapping relationship;
otherwise, the step of determining that the position relationship of the vehicle identification frame and the person identification frame is a non-inclusive and non-overlapping relationship is performed.
11. The human-vehicle information correlation method according to any one of claims 5 to 10, wherein the determining whether the vehicle and the person are correlated according to the positional relationship includes:
determining that there is an association between the vehicle and the person when the positional relationship is an inclusion relationship;
determining that there is an association between the vehicle and the person when the positional relationship is an overlapping relationship;
determining that there is no association between the vehicle and the person when the positional relationship is a non-inclusive and non-overlapping relationship.
12. The human-vehicle information correlation method of claim 11, wherein prior to the determining that there is no correlation between the vehicle and the person, the method further comprises:
determining a center point coordinate value C1 of the vehicle identification frame according to the vertex coordinate value L1 and the vertex coordinate value R1;
determining a center point coordinate value C2 of the personnel identification frame according to the vertex coordinate value L2 and the vertex coordinate value R2;
determining a distance D1 from the center point of the vehicle identification frame to the center point of the personnel identification frame according to the center point coordinate value C1 and the center point coordinate value C2;
determining a diagonal distance D2 of the video frame image;
judging whether the ratio of the distance D1 to the diagonal distance D2 is greater than a second preset threshold;
if so, determining that the vehicle is associated with the person;
otherwise, the step of determining that there is no association between the vehicle and the person is performed.
13. The human-vehicle information correlation method of claim 12, wherein prior to the determining that there is no correlation between the vehicle and the person, the method further comprises:
expanding the vehicle identification frame and the personnel identification frame to obtain a background identification frame;
extracting features of the background recognition frame, and determining the location information of the vehicle and the personnel according to the extracted features;
determining time information of the vehicle and the person appearing in the video to be processed;
acquiring an associated video of the video to be processed, wherein the associated video is video shot by a camera that is located in the same area as the camera shooting the video to be processed but at a different point location;
extracting associated video frame images from the associated videos provided by each point location according to the point location information and the time information;
traversing each associated video frame image, and recording the point locations where the vehicle appears and the point locations where the person appears;
if the intersection of the point locations where the vehicle appears and the point locations where the person appears is greater than a third preset threshold, determining that the vehicle and the person are associated;
otherwise, the step of determining that there is no association between the vehicle and the person is performed.
14. A man-vehicle information association device, comprising:
the acquisition module is used for acquiring a video to be processed;
the identification module is used for identifying vehicles and personnel contained in the video to be processed to obtain a vehicle identification frame and a personnel identification frame;
the extraction module is used for respectively extracting the characteristics of the vehicle identification frame and the personnel identification frame to obtain vehicle information and personnel information;
the judging module is used for judging whether the vehicle is associated with the person or not according to the vehicle identification frame and the person identification frame;
and the association module is used for associating the vehicle information with the personnel information when the vehicle and the personnel are associated.
15. A human-vehicle information association device, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the human-vehicle information correlation method as claimed in any one of claims 1 to 13.
16. A computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the human-vehicle information associating method according to any one of claims 1 to 13.
CN202011009511.1A, filed 2020-09-23: Man-vehicle information association method, device, equipment and storage medium (legal status: Pending)

Priority Applications (2)

CN202011009511.1A (priority date 2020-09-23, filing date 2020-09-23): Man-vehicle information association method, device, equipment and storage medium
PCT/CN2021/118538 (filing date 2021-09-15): Human-vehicle information association method and apparatus, and device and storage medium

Publications (1)

CN114255409A, published 2022-03-29 (Family ID: 80788626)


Also Published As

WO2022063002A1, published 2022-03-31


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination