WO2022063002A1 - Human-vehicle information association method and apparatus, and device and storage medium - Google Patents


Info

Publication number
WO2022063002A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
person
coordinate value
identification frame
vertex coordinate
Prior art date
Application number
PCT/CN2021/118538
Other languages
French (fr)
Chinese (zh)
Inventor
穆菁
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2022063002A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions

Definitions

  • The embodiments of the present application relate to the field of artificial intelligence image recognition, and in particular to a method, apparatus, device, and storage medium for associating person and vehicle information.
  • In a smart security system, the vehicle information about a vehicle and the person information about a user object are two completely independent parts; that is, no association is established between the vehicle and the person. Therefore, in order to identify associated vehicles and persons from massive video data, relevant staff need to spend a great deal of time and energy.
  • An embodiment of the present application provides a method for associating person and vehicle information, including: acquiring a video to be processed; identifying the vehicles and persons included in the video to be processed to obtain a vehicle identification frame and a person identification frame; performing feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information; judging, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and, if they are associated, associating the vehicle information with the person information.
  • The embodiment of the present application also provides a person-vehicle information association apparatus, including: an acquisition module, configured to acquire a video to be processed; an identification module, configured to identify the vehicles and persons included in the video to be processed and obtain a vehicle identification frame and a person identification frame; an extraction module, configured to perform feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information; a judgment module, configured to judge, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and an association module, configured to associate the vehicle information with the person information when the vehicle and the person are associated.
  • Embodiments of the present application further provide a person-vehicle information association device, including: a memory communicatively connected to at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the above method for associating person and vehicle information.
  • Embodiments of the present application further provide a computer-readable storage medium storing a computer program.
  • When the computer program is executed by a processor, the above-mentioned method for associating person-vehicle information is implemented.
  • FIG. 1 is a flowchart of a method for associating person and vehicle information provided according to a first embodiment of the present application
  • FIG. 2 is a schematic diagram 1 of determining the positional relationship between a vehicle identification frame and a person identification frame in a method for associating person and vehicle information provided according to a second embodiment of the present application;
  • FIG. 3 is a schematic diagram 2 of determining the positional relationship between a vehicle identification frame and a person identification frame in a method for associating person and vehicle information provided according to a second embodiment of the present application;
  • FIG. 4 is a schematic diagram 3 of determining the positional relationship between a vehicle identification frame and a person identification frame in a method for associating person-vehicle information provided according to the second embodiment of the present application;
  • FIG. 5 is a schematic diagram 4 of determining the positional relationship between a vehicle identification frame and a person identification frame in a method for associating person and vehicle information provided according to the second embodiment of the present application;
  • FIG. 6 is a schematic diagram of determining whether a vehicle and a person are associated in a method for associating person-vehicle information provided according to a third embodiment of the present application;
  • FIG. 7 is a schematic diagram of determining whether a vehicle and a person are associated in a method for associating person-vehicle information provided according to a fourth embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a device for associating person-vehicle information provided according to a fifth embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a device for associating person-vehicle information provided according to a sixth embodiment of the present application.
  • The purpose of the embodiments of the present application is to provide a method, apparatus, device, and storage medium for associating person-vehicle information, so as to solve the above-mentioned technical problems.
  • The method, apparatus, device, and storage medium for associating person-vehicle information proposed in this application perform feature extraction on the vehicle identification frame and the person identification frame demarcated in the video to be processed, which effectively avoids the interference of background factors and thus ensures the accuracy of the extracted vehicle information and person information; whether the corresponding vehicle and person are associated is judged according to the demarcated vehicle identification frame and person identification frame, and when it is determined that the two are associated, the otherwise isolated vehicle information and person information are associated, thereby realizing person-vehicle information association.
  • Consequently, in subsequent application scenarios of finding a person by vehicle or finding a vehicle by person, the time spent searching and the waste of manpower and material resources are greatly reduced.
  • The first embodiment relates to a method for associating person and vehicle information.
  • Feature extraction is performed on the vehicle identification frame and the person identification frame demarcated in the video to be processed, which effectively avoids the interference of background factors and ensures the accuracy of the extracted vehicle information and person information; whether the corresponding vehicle and person are associated is judged according to the demarcated vehicle identification frame and person identification frame, and when it is determined that the two are associated, the otherwise isolated vehicle information and person information are associated, so that person-vehicle information association is realized.
  • Consequently, in subsequent application scenarios of finding a person by vehicle or finding a vehicle by person, the time spent searching and the waste of manpower and material resources are greatly reduced.
  • The method for associating person and vehicle information provided in this embodiment is applied to any terminal device capable of executing the method, such as a personal computer, a tablet computer, or a smartphone; these are not listed one by one here, and this embodiment does not limit them.
  • Step 101: acquire the video to be processed.
  • Specifically, the above-mentioned video to be processed may come from surveillance cameras set up in different places, or from various big data platforms, which is not limited in this embodiment.
  • In addition, the above-mentioned video to be processed may be in various formats and forms, which is also not limited in this embodiment.
  • Step 102: identify the vehicles and persons included in the video to be processed, and obtain a vehicle identification frame and a person identification frame.
  • In practical applications, feature extraction for a target object usually first determines the target object, calibrates it, and determines its position in the video to be processed, so as to obtain a target identification frame for the target object; feature extraction is then performed on the target object within the target identification frame by using a preset recognition technology.
  • This embodiment is based on artificial intelligence video recognition technology: the vehicles and persons included in the video to be processed are identified, and the corresponding vehicle identification frames and person identification frames are calibrated according to the identified vehicles and persons.
  • It should be noted that the above-mentioned vehicles are not limited to motor vehicles in this embodiment; other vehicles with obvious characteristics, such as electric vehicles, tricycles, and rickshaws, are all vehicles that need to be identified from the video to be processed. The above-mentioned persons include not only the drivers and passengers driving or riding the above-mentioned vehicles but also pedestrians; that is, all persons appearing in the video to be processed are objects that need to be identified from the video to be processed.
  • In practical applications, this embodiment provides an implementation for identifying vehicles and persons from the video to be processed and then marking the vehicle identification frame corresponding to each vehicle and the person identification frame corresponding to each person, as follows:
  • Extract video frame images from the video to be processed; that is, with the frame as the unit, the image corresponding to each frame is taken as a video frame image, or the images corresponding to certain frames are taken as video frame images.
  • Since each frame, or every few frames, corresponds to a different picture, one frame can be selected to correspond to one video frame image.
  • Vehicle detection is performed on the video frame image, and when a vehicle is detected, all vehicles appearing in the video frame image are calibrated in the video frame image, so that N vehicle identification frames are obtained; person detection is likewise performed on the video frame image, and when a person is detected, all persons appearing in the video frame image are calibrated, so that M person identification frames are obtained.
  • In practical applications, a preset machine learning algorithm, such as a deep convolutional neural network algorithm, can be used to train vehicle sample data and face-and-human-figure sample data respectively, so as to obtain a vehicle recognition model for recognizing vehicles and a face-and-human-figure recognition model for recognizing persons.
  • N and M mentioned above are integers greater than 0, and in practical applications, the values of N and M may be the same or different.
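  • As a minimal illustration only (not taken from the original text), frame extraction and detection might look roughly as follows; the detector objects and their detect() method are hypothetical stand-ins for the vehicle recognition model and the face-and-human-figure recognition model mentioned above:

```python
# Sketch under stated assumptions: OpenCV is used only for video decoding;
# vehicle_model and person_model are hypothetical trained detectors.
import cv2


def extract_frame_images(video_path, frame_step=5):
    """Yield one video frame image every `frame_step` frames."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            yield index, frame
        index += 1
    capture.release()


def detect_boxes(frame, vehicle_model, person_model):
    """Return (vehicle_boxes, person_boxes); each box is (x1, y1, x2, y2),
    i.e. the upper-left vertex L and the lower-right vertex R."""
    vehicle_boxes = vehicle_model.detect(frame)   # hypothetical model API
    person_boxes = person_model.detect(frame)     # hypothetical model API
    return vehicle_boxes, person_boxes
```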
  • Step 103: perform feature extraction on the vehicle identification frame and the person identification frame respectively, to obtain vehicle information and person information.
  • In practical applications, the vehicle information extracted from the vehicle identification frame mainly includes the license plate number, model, vehicle color, vehicle brand, vehicle accessories (such as body and interior decoration), vehicle travel direction, vehicle appearance time, and the like.
  • The person information extracted from the person identification frame mainly includes facial feature information, clothing color, clothing style, hairstyle, hair color, body accessories (such as watches, backpacks, and glasses), person travel direction, person appearance time, and the like.
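  • As a hedged sketch of this step (not part of the original text), feature extraction could crop each identification frame before invoking attribute models, so that background pixels outside the frames do not influence the result; the describe() calls below are hypothetical placeholders for trained vehicle-attribute and person-attribute models:

```python
def extract_features(frame, vehicle_boxes, person_boxes,
                     vehicle_attr_model, person_attr_model):
    """Crop each identification frame and run hypothetical attribute models."""
    vehicle_info = []
    for (x1, y1, x2, y2) in vehicle_boxes:
        crop = frame[y1:y2, x1:x2]
        # e.g. plate number, model, colour, brand, travel direction, timestamp
        vehicle_info.append(vehicle_attr_model.describe(crop))  # hypothetical API
    person_info = []
    for (x1, y1, x2, y2) in person_boxes:
        crop = frame[y1:y2, x1:x2]
        # e.g. facial features, clothing colour/style, accessories, direction
        person_info.append(person_attr_model.describe(crop))    # hypothetical API
    return vehicle_info, person_info
```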
  • As can be known from the description of step 102, there may be multiple vehicle identification frames and multiple person identification frames identified from the video to be processed. Therefore, in order to determine the association between each vehicle and each person in the video to be processed, the identification frames corresponding to these vehicles and persons may first be combined; that is, the N vehicle identification frames and the M person identification frames are paired to obtain N × M person-vehicle combinations.
  • Accordingly, the N × M person-vehicle combinations need to be traversed; that is, the feature extraction operation mentioned in step 103 needs to be performed for each traversed person-vehicle combination, as in the traversal sketch below.
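  • A minimal traversal sketch (an assumption, not the original text) is given below; the boxes_suggest_association argument is a hypothetical placeholder for the judgment performed in steps 104 and 105:

```python
def associate_frame(vehicle_boxes, person_boxes, vehicle_info, person_info,
                    boxes_suggest_association):
    """Traverse the N x M person-vehicle combinations and collect associations.

    boxes_suggest_association(vehicle_box, person_box) -> bool is supplied by
    the caller and stands in for the judgment of steps 104-105."""
    associations = []
    for i, v_box in enumerate(vehicle_boxes):
        for j, p_box in enumerate(person_boxes):
            if boxes_suggest_association(v_box, p_box):
                # Step 105: link this vehicle's information with this person's.
                associations.append((vehicle_info[i], person_info[j]))
    return associations
```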
  • Step 104: determine, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated.
  • If it is determined through this judgment that the vehicle and the person are associated, proceed to step 105; otherwise the process ends directly, or a prompt is given, for example indicating that the vehicle and the person in the currently judged person-vehicle combination are not associated.
  • A specific method is to obtain the coordinate information of the vehicle identification frame and the coordinate information of the person identification frame, thereby obtaining vehicle coordinate information and person coordinate information, and finally to judge, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated.
  • The vehicle coordinate information mentioned above may be the coordinate information of the four vertices of the vehicle identification frame, specifically the coordinate value of each corresponding vertex, that is, the value on the X axis and the value on the Y axis; it may also be the vertex coordinate value of the upper-left corner together with the vertex coordinate value of the lower-right corner, or the vertex coordinate value of the upper-right corner together with the vertex coordinate value of the lower-left corner.
  • Similarly, the person coordinate information may be the coordinate information of the four vertices of the person identification frame, the vertex coordinate values of the upper-left and lower-right corners, or the vertex coordinate values of the upper-right and lower-left corners.
  • It should be noted that, when the vehicle coordinate information and the person coordinate information each include only two vertex coordinate values, the vertex coordinate values included in the two must correspond to the same vertices. That is, if the vehicle coordinate information consists of the upper-left and lower-right vertex coordinate values, the person coordinate information must also consist of the upper-left and lower-right vertex coordinate values; conversely, if the vehicle coordinate information consists of the upper-right and lower-left vertex coordinate values, the person coordinate information must also consist of the upper-right and lower-left vertex coordinate values.
  • For ease of description, the following takes as an example the case where the vehicle coordinate information at least includes the vertex coordinate value L1 of the upper-left corner of the vehicle identification frame and the vertex coordinate value R1 of the lower-right corner, and the person coordinate information at least includes the vertex coordinate value L2 of the upper-left corner of the person identification frame and the vertex coordinate value R2 of the lower-right corner:
  • First, the vertex coordinate value L1 is compared with the vertex coordinate value L2, and the vertex coordinate value R1 is compared with the vertex coordinate value R2; then, the positional relationship between the vehicle identification frame and the person identification frame is determined according to the comparison results; finally, it is judged, according to the positional relationship, whether the vehicle and the person are associated.
  • In other words, whether the vehicle and the person are associated is determined according to the positional relationship between the vehicle identification frame where the vehicle is located and the person identification frame where the person is located.
  • When the positional relationship between the vehicle identification frame and the person identification frame is an inclusion relationship, it may be considered that the vehicle in the vehicle identification frame and the person in the person identification frame are associated.
  • When the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship, that is, the two neither contain each other nor overlap, it may be considered that the vehicle in the vehicle identification frame and the person in the person identification frame are not associated.
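  • A hedged sketch of this positional-relationship check follows (not the original implementation); interpreting the vertex comparisons component-wise in image coordinates, with L = (x1, y1) and R = (x2, y2), is an assumption:

```python
def positional_relationship(vehicle_box, person_box):
    """Classify the relationship between two identification frames.

    Boxes are (x1, y1, x2, y2) with L = (x1, y1) and R = (x2, y2)."""
    vx1, vy1, vx2, vy2 = vehicle_box
    px1, py1, px2, py2 = person_box
    vehicle_contains_person = vx1 <= px1 and vy1 <= py1 and vx2 >= px2 and vy2 >= py2
    person_contains_vehicle = px1 <= vx1 and py1 <= vy1 and px2 >= vx2 and py2 >= vy2
    if vehicle_contains_person or person_contains_vehicle:
        return "inclusion"          # vehicle and person are considered associated
    overlap_w = min(vx2, px2) - max(vx1, px1)
    overlap_h = min(vy2, py2) - max(vy1, py1)
    if overlap_w > 0 and overlap_h > 0:
        return "overlap"            # handled further in the second embodiment
    return "non-inclusion, non-overlap"  # handled further in the third embodiment
```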
  • Step 105: associate the vehicle information with the person information.
  • Specifically, the operation of associating the vehicle information with the person information in step 105 is to associate the items of the vehicle information, such as the license plate number, model, vehicle color, vehicle brand, and vehicle accessories (such as body and interior decoration), with the corresponding items of the person information.
  • For example, for a certain person Zhang San, suppose there is a black Audi Q5 with the license plate number "123456". If Zhang San drives this car at location A at 2:00 p.m. on September 14, 2020, then by entering the license plate number "123456", Zhang San's ID number, or his biometric information, the following person-vehicle association information can be obtained: Zhang San, with ID number XXXX, drove a black Audi Q5 with license plate number 123456 and appeared at location A at 2:00 p.m. on September 14, 2020.
  • In practical applications, the associated vehicle information and person information can be reasonably extracted according to usage scenarios, and corresponding systems, such as smart security systems, can be built as needed to provide more convenient services for relevant staff.
  • It is not difficult to see that the method for associating person and vehicle information provided in this embodiment effectively avoids the interference of background factors by performing feature extraction on the vehicle identification frame and the person identification frame demarcated in the video to be processed, thereby ensuring the accuracy of the extracted vehicle information and person information.
  • When it is determined that the vehicle and the person are associated, the otherwise isolated vehicle information and person information are associated, so that person-vehicle information association is realized.
  • Consequently, in subsequent application scenarios of finding a person by vehicle or finding a vehicle by person, the time spent searching and the waste of manpower and material resources are greatly reduced.
  • The second embodiment of the present application relates to a method for associating person and vehicle information.
  • The second embodiment mainly illustrates the specific application scenario, mentioned in the first embodiment, in which the positional relationship between the vehicle identification frame and the person identification frame is determined according to the comparison results.
  • Specifically, the comparison results used to determine the positional relationship between the vehicle identification frame and the person identification frame are obtained by comparing the vertex coordinate value L1 of the vehicle identification frame with the vertex coordinate value L2 of the person identification frame, and by comparing the vertex coordinate value R1 of the vehicle identification frame with the vertex coordinate value R2 of the person identification frame.
  • The comparison results may fall into the following nine cases: (1) the vertex coordinate value L1 is greater than the vertex coordinate value L2, and the vertex coordinate value R1 is greater than the vertex coordinate value R2; (2) L1 is greater than L2, and R1 is equal to R2; (3) L1 is greater than L2, and R1 is less than R2; (4) L1 is equal to L2, and R1 is greater than R2; (5) L1 is equal to L2, and R1 is equal to R2; (6) L1 is equal to L2, and R1 is less than R2; (7) L1 is less than L2, and R1 is greater than R2; (8) L1 is less than L2, and R1 is equal to R2; (9) L1 is less than L2, and R1 is less than R2.
  • This embodiment first takes case (7) as an example; the positional relationship between the vehicle identification frame and the person identification frame in this case is shown in FIG. 2.
  • Here the positional relationship between the vehicle identification frame and the person identification frame is an inclusion relationship; specifically, the vehicle identification frame includes the person identification frame.
  • This embodiment also takes case (3) as an example; the positional relationship between the vehicle identification frame and the person identification frame in this case is shown in FIG. 3.
  • In general, when a person is in a vehicle, the vehicle identification frame should be larger than the person identification frame, that is, the person is located in the vehicle and may be the driver or a passenger, which is the situation shown in FIG. 2. However, for vehicles such as electric vehicles, tricycles, and rickshaws, the person identification frame drawn at this time often encloses the vehicle, so the person identification frame appears larger than the vehicle identification frame, which is the situation shown in FIG. 3.
  • When the obtained comparison results are cases (1) and (9) given above, the following two judgment logics can be used to further judge whether the vehicle and the person appearing in the same video frame image are associated.
  • Method 1: determine whether the vertex coordinate value L1 is not greater than the vertex coordinate value L2, whether the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and whether the vertex coordinate value L2 is less than the vertex coordinate value R1.
  • If so, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship, that is, the situation shown in FIG. 4; otherwise, the vehicle identification frame and the person identification frame are completely separated, there is no overlapping area between the two, and it is determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • Method 2: determine whether the vertex coordinate value L2 is not greater than the vertex coordinate value L1, whether the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and whether the vertex coordinate value L1 is less than the vertex coordinate value R2.
  • If so, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship, that is, the situation shown in FIG. 5; otherwise, the vehicle identification frame and the person identification frame are completely separated, there is no overlapping area between the two, and it is determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • That is, when the above condition is satisfied, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship; otherwise, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • The logic of Method 1 and Method 2 is roughly the same, except that the vertex coordinate values from which the overlapping area and the reference area are determined are different.
  • When the positional relationship is an overlapping relationship, whether the vehicle and the person are associated can be further judged according to whether the ratio of the overlapping area to the reference area exceeds a first preset threshold; this first preset threshold can be set by those skilled in the art according to actual needs, for example, to 20%.
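  • The following sketch (an assumption, not the patent's reference implementation) illustrates the overlap test; using the smaller of the two frames as the reference area is an assumed choice, since the text leaves the reference vertices to Method 1 or Method 2:

```python
def overlap_suggests_association(vehicle_box, person_box, first_threshold=0.20):
    """Compute the overlapping area and compare its ratio to a reference area
    against the first preset threshold (20% in the example above)."""
    vx1, vy1, vx2, vy2 = vehicle_box
    px1, py1, px2, py2 = person_box
    overlap_w = max(0, min(vx2, px2) - max(vx1, px1))
    overlap_h = max(0, min(vy2, py2) - max(vy1, py1))
    overlap_area = overlap_w * overlap_h
    if overlap_area == 0:
        return False  # completely separated: non-inclusion, non-overlapping
    vehicle_area = (vx2 - vx1) * (vy2 - vy1)
    person_area = (px2 - px1) * (py2 - py1)
    reference_area = min(vehicle_area, person_area)  # assumed reference area
    return overlap_area / reference_area > first_threshold
```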
  • In addition, the video frame images corresponding to adjacent frames can be used to assist the judgment, so as to eliminate interference and ensure the accuracy of the finally established person-vehicle association results.
  • By determining the positional relationship between the vehicle identification frame and the person identification frame according to the above method, the person-vehicle information association method provided in this embodiment can be adapted both to scenes in which the person and the vehicle are in the same frame (the person is on the vehicle) and to scenes in which the person and the vehicle are in different frames but the same picture, so that the relationships among isolated and scattered target information (vehicle information and person information) can be better analyzed.
  • For example, although a user is not the owner of a vehicle, if he often appears at the same place and at the same time as the vehicle, then when the vehicle needs to be tracked later, the vehicle can be tracked as long as the travel trajectory of the user is obtained. This provides more, deeper, and more valuable clue information for the relevant staff, thereby improving the practical value of the vehicle information and person information.
  • The third embodiment of the present application relates to a method for associating person and vehicle information.
  • The third embodiment is mainly aimed at the scenario, mentioned in the second embodiment, in which the positional relationship between the vehicle identification frame and the person identification frame is determined to be a non-inclusion, non-overlapping relationship, that is, there is a certain distance between the vehicle identification frame and the person identification frame.
  • In other words, the third embodiment further screens associated vehicles and persons out of vehicles and persons that would otherwise be judged non-associated. For ease of understanding and description, the following description is made with reference to FIG. 6.
  • Specifically, the center point coordinate value C1 of the vehicle identification frame can be determined according to the vertex coordinate value L1 and the vertex coordinate value R1; the center point coordinate value C2 of the person identification frame can be determined according to the vertex coordinate value L2 and the vertex coordinate value R2; then, according to the center point coordinate value C1 and the center point coordinate value C2, the distance D1 from the center point of the vehicle identification frame to the center point of the person identification frame is determined.
  • It should be noted that, in practical applications, the above-mentioned diagonal distance D2 may be determined according to the vertex coordinate value of the upper-left corner of the video frame image and the vertex coordinate value of the lower-right corner, or according to the vertex coordinate value of the upper-right corner of the video frame image and the vertex coordinate value of the lower-left corner, which is not limited in this embodiment.
  • Similarly, the distance D1 can be determined according to the center point coordinate value C1 of the vehicle identification frame and the center point coordinate value C2 of the person identification frame, or according to the coordinate value of a coordinate point on or within the vehicle identification frame and the coordinate value of a coordinate point on or within the person identification frame, which is not limited in this embodiment.
  • Whether the vehicle and the person are associated can then be judged according to the distance D1 and the diagonal distance D2, for example according to whether the ratio of D1 to D2 is less than a second preset threshold. The above-mentioned second preset threshold can be set by those skilled in the art according to actual needs; for example, it is set to 30%.
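  • A hedged sketch of this distance test follows (not from the original text); reading the second preset threshold as an upper bound on the ratio D1/D2 is an assumption:

```python
import math


def distance_suggests_association(vehicle_box, person_box, frame_shape,
                                  second_threshold=0.30):
    """Compare the centre-to-centre distance D1 with the frame-image
    diagonal D2, using the second preset threshold (30% here)."""
    vx1, vy1, vx2, vy2 = vehicle_box
    px1, py1, px2, py2 = person_box
    c1 = ((vx1 + vx2) / 2.0, (vy1 + vy2) / 2.0)    # centre C1 of the vehicle frame
    c2 = ((px1 + px2) / 2.0, (py1 + py2) / 2.0)    # centre C2 of the person frame
    d1 = math.hypot(c1[0] - c2[0], c1[1] - c2[1])  # centre-to-centre distance D1
    height, width = frame_shape[:2]
    d2 = math.hypot(width, height)                 # image diagonal D2
    return d1 / d2 < second_threshold
```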
  • In addition, the video frame images corresponding to adjacent frames can be used to assist the judgment, so as to eliminate interference and ensure the accuracy of the finally established person-vehicle association results.
  • In this embodiment, when the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship, that is, there is a certain distance between the vehicle identification frame and the person identification frame, it is judged by means of this distance whether the vehicle in the vehicle identification frame and the person in the person identification frame are associated before concluding that they are not associated. In this way, the method for associating person and vehicle information provided in this embodiment can be better adapted to scenes in which the person and the vehicle are in different frames but the same picture, so that the relationships among isolated and scattered target information (vehicle information and person information) can be better analyzed.
  • The fourth embodiment of the present application relates to a method for associating person and vehicle information.
  • The fourth embodiment is mainly aimed at the scenario in the third embodiment in which it is determined, according to the distance D1 and the diagonal distance D2, that the vehicle and the person are not associated, that is, the person and the vehicle are in the same picture but the distance judgment is not satisfied; in other words, the fourth embodiment further screens associated vehicles and persons out of vehicles and persons judged non-associated.
  • For ease of understanding and description, the following description is made with reference to FIG. 7.
  • The above-mentioned operation of expanding the vehicle identification frame and the person identification frame is to expand the vehicle identification frame and the person identification frame outward to a preset size.
  • The resulting background identification frame may be a single frame that includes both the vehicle identification frame and the person identification frame; alternatively, the vehicle identification frame and the person identification frame may be expanded separately, so as to obtain a background identification frame corresponding to the vehicle identification frame and a background identification frame corresponding to the person identification frame.
  • In either case, the background identification frame includes the vehicle identification frame and/or the person identification frame; a minimal expansion sketch is given below.
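  • The expansion sketch mentioned above is an illustration only; using a fixed pixel margin as the form of the preset size is an assumption:

```python
def expand_box(box, margin, frame_shape):
    """Expand an identification frame outward by a preset margin (in pixels)
    to obtain a background identification frame, clamped to the image bounds."""
    x1, y1, x2, y2 = box
    height, width = frame_shape[:2]
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(width, x2 + margin), min(height, y2 + margin))
```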
  • In practical applications, those skilled in the art can pre-select a suitable machine learning algorithm to construct a corresponding feature extraction model as needed, and then perform feature extraction on the background identification frame according to the trained feature extraction model, so as to extract feature information that meets the requirements.
  • The associated video is a video captured by a camera that is located in the same area as the camera shooting the video to be processed, but at a different point.
  • The operation of extracting associated video frame images from the associated video provided by each point in step (5), and then identifying the above-mentioned vehicle and person in the associated video frame images, is similar to the extraction method given in step 102 of the first embodiment and will not be repeated here.
  • If the intersection of the points where the vehicle appears and the points where the person appears is greater than a third preset threshold, it is determined that the vehicle and the person are associated; otherwise, it is determined that the vehicle and the person are not associated.
  • The above-mentioned third preset threshold can be set by those skilled in the art according to actual needs; for example, it can be set to half of the number of points, so that association requires the intersection to cover more than half of the points.
  • Taking FIG. 7 as an example, suppose there are four associated videos: associated video 1 captured by the camera at point 1, associated video 2 captured by the camera at point 2, associated video 3 captured by the camera at point 3, and associated video 4 captured by the camera at point 4.
  • The corresponding video frame images are extracted from associated video 1, associated video 2, associated video 3, and associated video 4 respectively, and the points where the vehicle appears and the points where the person appears are then recorded.
  • Suppose vehicle A and person B appear in the video frame image captured at point 1; vehicle A and person B appear in the video frame image captured at point 2; only person B appears in the video frame image captured at point 3; and vehicle A and person B appear in the video frame image captured at point 4.
  • Whether vehicle A and person B are associated is judged according to whether the intersection of the points where vehicle A appears and the points where person B appears is greater than the third preset threshold, here taken as half of the number of points. As can be seen from the above records, there are 4 auxiliary points in FIG. 7, so as long as the intersection of the points where vehicle A appears and the points where person B appears is greater than 2, that is, 3 or 4 points, vehicle A and person B can be considered associated. In the example given in FIG. 7, the intersection contains 3 points (points 1, 2, and 4), so vehicle A and person B are associated.
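  • A minimal sketch of this co-occurrence test (not taken from the original text) is shown below, together with the FIG. 7 example:

```python
def co_occurrence_suggests_association(vehicle_points, person_points, total_points):
    """Count the points (camera sites) where both the vehicle and the person
    were observed, and require the intersection to exceed the third preset
    threshold, here half of all points."""
    shared_points = set(vehicle_points) & set(person_points)
    third_threshold = total_points / 2.0
    return len(shared_points) > third_threshold


# Example mirroring FIG. 7: vehicle A is seen at points 1, 2, 4 and
# person B at points 1, 2, 3, 4, so the intersection has 3 points (> 2).
print(co_occurrence_suggests_association({1, 2, 4}, {1, 2, 3, 4}, 4))  # True
```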
  • In this embodiment, the associated videos shot by cameras at different points are used to assist in judging whether the current vehicle and person are associated, so that the person-vehicle information association method provided by this embodiment can be better adapted to scenes in which the person and the vehicle are in the same picture but the distance judgment is not satisfied.
  • That is, associated vehicles and persons can be screened out of such scenes as far as possible, so that the relationships among isolated and scattered target information (vehicle information and person information) can be better analyzed.
  • The fifth embodiment of the present application relates to a person-vehicle information association apparatus, as shown in FIG. 8, including the following modules.
  • The acquisition module 801 is used to acquire the video to be processed; the identification module 802 is used to identify the vehicles and persons included in the video to be processed and obtain a vehicle identification frame and a person identification frame; the extraction module 803 is used to perform feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information; the judgment module 804 is used to judge, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and the association module 805 is used to associate the vehicle information with the person information when the vehicle and the person are associated.
  • When the identification module 802 identifies the vehicles and persons included in the video to be processed and obtains the vehicle identification frame and the person identification frame, the specific steps are:
  • Vehicle detection is performed on the video frame image, and when a vehicle is detected, all vehicles appearing in the video frame image are calibrated in the video frame image, and N vehicle identification frames are obtained, where N is an integer greater than 0; person detection is likewise performed, and when a person is detected, all persons appearing in the video frame image are calibrated, and M person identification frames are obtained, where M is an integer greater than 0;
  • In practical applications, the person-vehicle information association apparatus may further include a combination module.
  • The combination module is used to combine the N vehicle identification frames and the M person identification frames to obtain N × M person-vehicle combinations.
  • In this case, the operations performed by the judgment module 804 apply to each person-vehicle combination; that is, the judgment module 804 traverses the N × M person-vehicle combinations and performs the judgment operation for each traversed person-vehicle combination.
  • When the judgment module 804 judges, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated, it specifically: obtains the coordinate information of the vehicle identification frame and the coordinate information of the person identification frame to obtain vehicle coordinate information and person coordinate information, and then determines, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated.
  • Wherein, the vehicle coordinate information at least includes the vertex coordinate value L1 of the upper-left corner of the vehicle identification frame and the vertex coordinate value R1 of the lower-right corner, and the person coordinate information at least includes the vertex coordinate value L2 of the upper-left corner of the person identification frame and the vertex coordinate value R2 of the lower-right corner.
  • When the judgment module 804 judges, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated, it specifically: compares the vertex coordinate value L1 with the vertex coordinate value L2 and compares the vertex coordinate value R1 with the vertex coordinate value R2; determines the positional relationship between the vehicle identification frame and the person identification frame according to the comparison results; and judges, according to the positional relationship, whether the vehicle and the person are associated.
  • When the judgment module 804 determines the positional relationship between the vehicle identification frame and the person identification frame according to the comparison results, it specifically operates as follows:
  • If the vertex coordinate value L1 is not greater than the vertex coordinate value L2 and the vertex coordinate value R1 is not less than the vertex coordinate value R2, or if the vertex coordinate value L1 is not less than the vertex coordinate value L2 and the vertex coordinate value R1 is not greater than the vertex coordinate value R2, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an inclusion relationship;
  • otherwise, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion relationship.
  • Further, the judgment module 804 is also configured to perform the following operations:
  • if the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1, determine that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship;
  • otherwise, determine that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • The judgment module 804 is further configured to perform the following operations:
  • when the corresponding condition is satisfied, perform the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship; otherwise, perform the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • Further, the judgment module 804 is also configured to perform the following operations:
  • if the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2, determine that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship;
  • otherwise, determine that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • The judgment module 804 is further configured to perform the following operations:
  • when the corresponding condition is satisfied, perform the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlapping relationship; otherwise, perform the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusion, non-overlapping relationship.
  • When the association module 805 determines, according to the positional relationship, whether the vehicle and the person are associated, it specifically operates as follows:
  • Further, the judgment module 804 is also configured to perform the following operations: determine the center point coordinate value C1 of the vehicle identification frame and the center point coordinate value C2 of the person identification frame, and determine, according to the center point coordinate value C1 and the center point coordinate value C2, the distance D1 from the center point of the vehicle identification frame to the center point of the person identification frame;
  • The judgment module 804 is further configured to perform the following operations:
  • acquire an associated video associated with the video to be processed, where the associated video is a video captured by a camera that is located in the same area as the camera shooting the video to be processed but at a different point;
  • if the intersection of the points where the vehicle appears and the points where the person appears is greater than the third preset threshold, determine that the vehicle and the person are associated.
  • It is not difficult to see that this embodiment is a device embodiment corresponding to the first, second, third, or fourth embodiment, and this embodiment can be implemented in cooperation with the first, second, third, or fourth embodiment.
  • The related technical details mentioned in the first, second, third, or fourth embodiment are still valid in this embodiment and, in order to reduce repetition, are not repeated here.
  • Correspondingly, the related technical details mentioned in this embodiment can also be applied to the first, second, third, or fourth embodiment.
  • It is worth mentioning that the modules involved in this embodiment are logical units; a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units.
  • In order to highlight the innovative part of the present application, this embodiment does not introduce units that are not closely related to solving the technical problem raised by the present application, but this does not mean that no other units exist in this embodiment.
  • The sixth embodiment of the present application relates to a person-vehicle information association device, as shown in FIG. 9, including: at least one processor 901; and a memory 902 communicatively connected to the at least one processor 901; wherein the memory 902 stores instructions executable by the at least one processor 901, and the instructions are executed by the at least one processor 901, so that the at least one processor 901 can execute the method for associating person-vehicle information described in the above method embodiments.
  • the memory 902 and the processor 901 are connected by a bus, and the bus may include any number of interconnected buses and bridges, and the bus connects one or more processors 901 and various circuits of the memory 902 together.
  • the bus may also connect together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein.
  • the bus interface provides the interface between the bus and the transceiver.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other devices over a transmission medium.
  • the data processed by the processor 901 is transmitted on the wireless medium through the antenna, and further, the antenna also receives the data and transmits the data to the processor 901 .
  • Processor 901 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interface, voltage regulation, power management, and other control functions.
  • the memory 902 may be used to store data used by the processor 901 when performing operations.
  • A seventh embodiment of the present application relates to a computer-readable storage medium storing a computer program.
  • When the computer program is executed by a processor, the method for associating person-vehicle information described in the above method embodiments is implemented.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application relate to the field of artificial intelligence image identification. Disclosed are a human-vehicle information association method and apparatus, and a device and a storage medium. The human-vehicle information association method of the present application comprises: acquiring a video to be processed; identifying a vehicle and a person comprised in said video, so as to obtain a vehicle identification box and a person identification box; respectively performing feature extraction on the vehicle identification box and the person identification box to obtain vehicle information and person information; determining, according to the vehicle identification box and the person identification box, whether there is an association between the vehicle and the person; and if so, associating the vehicle information with the person information.

Description

Person-vehicle information association method, apparatus, device and storage medium
Cross Reference
This application is based on the Chinese patent application with application number 202011009511.1, filed on September 23, 2020, and claims priority to that Chinese patent application, the entire content of which is hereby incorporated into this application by reference.
Technical Field
The embodiments of the present application relate to the field of artificial intelligence image recognition, and in particular to a method, apparatus, device, and storage medium for associating person and vehicle information.
Background
With the development of artificial intelligence technology, intelligent video security systems have developed rapidly. At present, intelligent video security systems have become an important part of national informatization construction and national (community) security construction. At the same time, with the development of machine learning technology, deep-learning-based recognition technologies for vehicles, license plates, pedestrians, faces, and the like have gradually matured. These technologies provide very convenient support for smart security work and have achieved great success in it.
However, in a smart security system, the vehicle information about a vehicle and the person information about a user object are two completely independent parts; that is, no association is established between the vehicle and the person. Therefore, in order to identify associated vehicles and persons from massive video data, relevant staff need to spend a great deal of time and energy.
Therefore, the problem of how to associate vehicles with persons, so as to reduce the time spent searching and the waste of manpower and material resources, urgently needs to be solved.
Summary of the Invention
An embodiment of the present application provides a method for associating person and vehicle information, including: acquiring a video to be processed; identifying the vehicles and persons included in the video to be processed to obtain a vehicle identification frame and a person identification frame; performing feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information; judging, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and, if they are associated, associating the vehicle information with the person information.
The embodiment of the present application also provides a person-vehicle information association apparatus, including: an acquisition module, configured to acquire a video to be processed; an identification module, configured to identify the vehicles and persons included in the video to be processed and obtain a vehicle identification frame and a person identification frame; an extraction module, configured to perform feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information; a judgment module, configured to judge, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and an association module, configured to associate the vehicle information with the person information when the vehicle and the person are associated.
Embodiments of the present application further provide a person-vehicle information association device, including: a memory communicatively connected to at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the above method for associating person and vehicle information.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the above-mentioned method for associating person-vehicle information is implemented.
Description of the Drawings
One or more embodiments are exemplarily described with reference to the corresponding figures in the accompanying drawings, and these exemplary descriptions do not constitute limitations on the embodiments.
FIG. 1 is a flowchart of a method for associating person and vehicle information provided according to a first embodiment of the present application;
FIG. 2 is a first schematic diagram of determining the positional relationship between a vehicle identification frame and a person identification frame in a method for associating person and vehicle information provided according to a second embodiment of the present application;
FIG. 3 is a second schematic diagram of determining the positional relationship between a vehicle identification frame and a person identification frame in the method for associating person and vehicle information provided according to the second embodiment of the present application;
FIG. 4 is a third schematic diagram of determining the positional relationship between a vehicle identification frame and a person identification frame in the method for associating person and vehicle information provided according to the second embodiment of the present application;
FIG. 5 is a fourth schematic diagram of determining the positional relationship between a vehicle identification frame and a person identification frame in the method for associating person and vehicle information provided according to the second embodiment of the present application;
FIG. 6 is a schematic diagram of determining whether a vehicle and a person are associated in a method for associating person-vehicle information provided according to a third embodiment of the present application;
FIG. 7 is a schematic diagram of determining whether a vehicle and a person are associated in a method for associating person-vehicle information provided according to a fourth embodiment of the present application;
FIG. 8 is a schematic structural diagram of a person-vehicle information association apparatus provided according to a fifth embodiment of the present application;
FIG. 9 is a schematic structural diagram of a person-vehicle information association device provided according to a sixth embodiment of the present application.
具体实施方式detailed description
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图对本申请的各实施例进行详细的阐述。然而,本领域的普通技术人员可以理解,在本申请各实施例中,为了使读者更好地理解本申请而提出了许多技术细节。但是,即使没有这些技术细节和基于以下各实施例的种种变化和修改,也可以实现本申请所要求保护的技术方案。以下各个实施例的划分是为了描述方便,不应对本申请的具体实现方式构成任何限定,各个实施例在不矛盾的前提下可以相互结合相互引用。In order to make the objectives, technical solutions and advantages of the embodiments of the present application more clear, each embodiment of the present application will be described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art can understand that, in each embodiment of the present application, many technical details are provided for the reader to better understand the present application. However, even without these technical details and various changes and modifications based on the following embodiments, the technical solutions claimed in the present application can be realized. The following divisions of the various embodiments are for the convenience of description, and should not constitute any limitation on the specific implementation of the present application, and the various embodiments may be combined with each other and referred to each other on the premise of not contradicting each other.
The purpose of the embodiments of the present application is to provide a method, apparatus, device, and storage medium for associating person and vehicle information, so as to solve the above-mentioned technical problems.
In the method, apparatus, device, and storage medium for associating person and vehicle information proposed in the present application, features are extracted from the vehicle identification frame and the person identification frame demarcated in the video to be processed, which effectively avoids interference from background factors and thereby ensures the accuracy of the extracted vehicle information and person information. Whether the corresponding vehicle and person are associated is judged according to the demarcated vehicle identification frame and person identification frame, and when the two are determined to be associated, the otherwise isolated vehicle information and person information are linked, thereby achieving person-vehicle information association. In subsequent application scenarios of finding a person by vehicle or finding a vehicle by person, this greatly reduces search time and the waste of manpower and material resources.
The first embodiment relates to a method for associating person and vehicle information. Features are extracted from the vehicle identification frame and the person identification frame demarcated in the video to be processed, which effectively avoids interference from background factors and thereby ensures the accuracy of the extracted vehicle information and person information. Whether the corresponding vehicle and person are associated is judged according to the demarcated vehicle identification frame and person identification frame, and when the two are determined to be associated, the otherwise isolated vehicle information and person information are linked, thereby achieving person-vehicle information association and greatly reducing search time and the waste of manpower and material resources in subsequent scenarios of finding a person by vehicle or finding a vehicle by person.
The implementation details of the method for associating person and vehicle information of this embodiment are described below. The following content is provided only to facilitate understanding and is not necessary for implementing this solution.
The method for associating person and vehicle information provided in this embodiment is applied to any terminal device capable of executing the method, such as a personal computer, a tablet computer, or a smartphone; these are not listed one by one here, and this embodiment imposes no limitation in this regard.
The specific process of this embodiment is shown in FIG. 1 and includes the following steps:
Step 101: acquire the video to be processed.
Specifically, the above-mentioned video to be processed may come from surveillance cameras installed at different locations, or from various big data platforms, which is not limited in this embodiment.
In addition, the above-mentioned video to be processed may be in any format and of any form, which is likewise not limited in this embodiment.
Step 102: identify the vehicles and persons included in the video to be processed, and obtain vehicle identification frames and person identification frames.
It should be understood that, in practice, feature extraction for a target object usually first determines the target object, then demarcates the target object and determines its position in the video to be processed so as to obtain a target identification frame for the target object, and then uses a preset recognition technology to extract features of the target object within the target identification frame.
Therefore, in order to realize person-vehicle information association, a preset recognition technology needs to be adopted. This embodiment is specifically based on artificial-intelligence video recognition technology: the video to be processed is analyzed and processed to identify the vehicles and persons included in it, and the corresponding vehicle identification frames and person identification frames are finally demarcated according to the identified vehicles and persons.
The vehicles mentioned above are not limited to motor vehicles in this embodiment; other vehicles with obvious characteristics, such as electric bicycles, tricycles, and rickshaws, are also vehicles that need to be identified from the video to be processed.
Correspondingly, the persons mentioned above include not only the drivers and passengers driving or riding the above-mentioned vehicles, but also pedestrians; that is, everyone appearing in the video to be processed is a person that needs to be identified from the video to be processed.
For ease of understanding, this embodiment provides an implementation for identifying vehicles and persons from the video to be processed and then marking the vehicle identification frame corresponding to each vehicle and the person identification frame corresponding to each person, as follows:
(1) Extract video frame images from the video to be processed, that is, on a per-frame basis, take the image corresponding to each frame as one video frame image, or take the images corresponding to several frames as one video frame image.
Specifically, in practical applications, when the video picture to be processed changes little, that is, dozens or even more consecutive frames correspond to the same picture, these frames, i.e. the frames corresponding to the same picture, may be merged into one frame to obtain one video frame image.
Correspondingly, when the video picture to be processed changes greatly, that is, each frame or every few frames corresponds to a different picture, one frame may be selected to correspond to one video frame image.
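As a rough illustration of this selection strategy, the following Python sketch (using OpenCV) keeps one video frame image per run of nearly identical consecutive frames; the `diff_threshold` value and the mean-absolute-difference test are illustrative assumptions, not part of the embodiment.

```python
import cv2
import numpy as np

def extract_frame_images(video_path, diff_threshold=5.0):
    """Extract video frame images, merging runs of nearly identical frames.

    diff_threshold is a hypothetical tuning value: consecutive frames whose
    mean absolute pixel difference stays below it are treated as the same
    picture and collapsed into a single video frame image.
    """
    capture = cv2.VideoCapture(video_path)
    frame_images = []
    previous = None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous is None or np.mean(cv2.absdiff(gray, previous)) > diff_threshold:
            frame_images.append(frame)  # picture changed: keep as a new frame image
        previous = gray
    capture.release()
    return frame_images
```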
(2) Perform vehicle detection on the video frame image, and when vehicles are detected, demarcate all vehicles appearing in the video frame image to obtain N vehicle identification frames.
(3) Perform face and human-figure detection on the video frame image, and when persons are detected, demarcate all persons appearing in the video frame image to obtain M person identification frames.
It should be understood that, since multiple video frame images can usually be extracted from one video to be processed, in a specific application the above-mentioned vehicle detection and face/human-figure detection need to be performed on every video frame image.
To facilitate implementation, a preset machine learning algorithm, such as a deep convolutional neural network algorithm, may be used to train on vehicle sample data and face/human-figure sample data respectively, so as to obtain a vehicle recognition model for recognizing vehicles and a face/human-figure recognition model for recognizing persons.
In addition, it should be understood that, since image recognition technology is relatively widespread, its specific recognition methods are not described in detail in this embodiment. Those skilled in the art may select a suitable recognition technology as needed to recognize the video to be processed, so as to identify the vehicles and persons included in it and then determine the corresponding vehicle identification frames and person identification frames.
In addition, both N and M mentioned above are integers greater than 0, and in practical applications the values of N and M may be the same or different.
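A minimal sketch of the detection step is given below. The `vehicle_model` and `person_model` objects, and their `detect` interface, are assumptions standing in for the trained vehicle recognition model and face/human-figure recognition model mentioned above; any detector returning boxes as (x1, y1, x2, y2) tuples would serve.

```python
def detect_identification_frames(frame_image, vehicle_model, person_model):
    """Demarcate vehicles and persons in one video frame image.

    Returns (vehicle_frames, person_frames): N and M boxes respectively,
    each box given as (x_left, y_top, x_right, y_bottom) in pixel coordinates.
    The .detect(...) interface is assumed, not a specific library API.
    """
    vehicle_frames = vehicle_model.detect(frame_image)  # N vehicle identification frames
    person_frames = person_model.detect(frame_image)    # M person identification frames
    return vehicle_frames, person_frames
```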
Step 103: perform feature extraction on the vehicle identification frames and person identification frames respectively to obtain vehicle information and person information.
Specifically, in this embodiment, the vehicle information extracted from a vehicle identification frame mainly includes the license plate number, vehicle type, vehicle color, vehicle brand, vehicle accessories (such as the body and interior decoration), the vehicle's direction of travel, the time the vehicle appears, and so on.
Correspondingly, the person information extracted from a person identification frame mainly includes facial feature information, clothing color, clothing style, hairstyle, hair color, body accessories (such as watches, backpacks, and glasses), the person's direction of travel, the time the person appears, and so on.
In addition, as can be seen from the description of step 102, there may be multiple vehicle identification frames and person identification frames identified from the video to be processed. Therefore, in order to determine the association between each vehicle and each person in the video to be processed, the identification frames corresponding to these vehicles and persons may first be combined, that is, the N vehicle identification frames and the M person identification frames are combined to obtain N×M person-vehicle combinations.
After the N×M person-vehicle combinations are obtained, they need to be traversed, that is, the feature extraction operation described in step 103 needs to be performed for each traversed person-vehicle combination.
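The pairing and traversal of the N vehicle identification frames and M person identification frames could look like the following sketch; `extract_vehicle_info` and `extract_person_info` are hypothetical callables standing in for the feature extraction of step 103.

```python
from itertools import product

def traverse_person_vehicle_combinations(vehicle_frames, person_frames,
                                          extract_vehicle_info, extract_person_info):
    """Build the N x M person-vehicle combinations and extract features for each."""
    combinations = []
    for vehicle_frame, person_frame in product(vehicle_frames, person_frames):
        combinations.append({
            "vehicle_frame": vehicle_frame,
            "person_frame": person_frame,
            "vehicle_info": extract_vehicle_info(vehicle_frame),
            "person_info": extract_person_info(person_frame),
        })
    return combinations
```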
Step 104: judge whether the vehicle and the person are associated according to the vehicle identification frame and the person identification frame.
Specifically, if it is determined through the judgment that the vehicle and the person are associated, proceed to step 105; otherwise, end directly, or give a prompt, for example, that the vehicle and the person in the currently judged person-vehicle combination are not associated.
In this embodiment, when judging whether the vehicle and the person in the current person-vehicle combination are associated according to the vehicle identification frame and the person identification frame of each person-vehicle combination, the coordinate information of the vehicle identification frame and the coordinate information of the person identification frame are obtained, thereby obtaining vehicle coordinate information and person coordinate information; finally, whether the vehicle and the person are associated is judged according to the vehicle coordinate information and the person coordinate information.
The vehicle coordinate information mentioned above may be the coordinate information of the four vertices of the vehicle identification frame, specifically the coordinate values of the corresponding vertices, that is, the value on the X axis and the value on the Y axis; it may also be the vertex coordinate value of the upper-left corner and the vertex coordinate value of the lower-right corner, or the vertex coordinate value of the upper-right corner and the vertex coordinate value of the lower-left corner.
Correspondingly, the person coordinate information may be the coordinate information of the four vertices of the person identification frame, or the vertex coordinate values of the upper-left and lower-right corners, or the vertex coordinate values of the upper-right and lower-left corners.
In addition, it is worth mentioning that, in order to facilitate judging whether the vehicle and the person are associated according to the vehicle coordinate information and the person coordinate information, in a specific application, if the vehicle coordinate information and the person coordinate information each include only two vertex coordinate values, it is necessary to ensure that they are the coordinate values of corresponding vertices. That is, if the vehicle coordinate information consists of the upper-left and lower-right vertex coordinate values, the person coordinate information also needs to consist of the upper-left and lower-right vertex coordinate values; conversely, if the vehicle coordinate information consists of the upper-right and lower-left vertex coordinate values, the person coordinate information also needs to consist of the upper-right and lower-left vertex coordinate values.
For ease of description, this embodiment takes as an example the case where the vehicle coordinate information includes at least the vertex coordinate value L1 of the upper-left corner and the vertex coordinate value R1 of the lower-right corner of the vehicle identification frame, and the person coordinate information includes at least the vertex coordinate value L2 of the upper-left corner and the vertex coordinate value R2 of the lower-right corner of the person identification frame:
Specifically, first, the vertex coordinate value L1 is compared with the vertex coordinate value L2, and the vertex coordinate value R1 is compared with the vertex coordinate value R2; then, the positional relationship between the vehicle identification frame and the person identification frame is determined according to the comparison results; finally, whether the vehicle and the person are associated is judged according to the positional relationship.
That is, whether a vehicle and a person are associated is determined according to the positional relationship between the vehicle identification frame in which the vehicle is located and the person identification frame in which the person is located.
For example, when the positional relationship between the vehicle identification frame and the person identification frame is an inclusion relationship, the vehicle in the vehicle identification frame and the person in the person identification frame may be considered associated.
For another example, when the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, that is, neither contains the other but they partially overlap, the vehicle in the vehicle identification frame and the person in the person identification frame may be considered associated.
For another example, when the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusive and non-overlapping relationship, that is, the two neither contain each other nor overlap, the vehicle in the vehicle identification frame and the person in the person identification frame may be considered not associated.
Step 105: associate the vehicle information with the person information.
Specifically, when the vehicle in the vehicle identification frame and the person in the person identification frame are associated, the operation of associating the vehicle information with the person information in step 105 establishes an association between, on the one hand, the license plate number, vehicle type, vehicle color, vehicle brand, vehicle accessories (such as the body and interior decoration), the vehicle's direction of travel, the time the vehicle appears, and so on in the vehicle information, and, on the other hand, the facial feature information, clothing color, clothing style, hairstyle, hair color, body accessories (such as watches, backpacks, and glasses), the person's direction of travel, the time the person appears, and so on in the person information.
That is, in practical applications, as long as any one or several items of the above vehicle information are extracted, all the vehicle information as well as the person information of the associated persons can be retrieved, which greatly shortens retrieval and screening time. In security scenarios, such as finding a vehicle by person or finding a person by vehicle, this greatly reduces search time and the waste of manpower and material resources.
For example, suppose a person, Zhang San, owns a black Audi Q5 with license plate number "123456". If Zhang San drove this car and appeared at location A at 2 p.m. on September 14, 2020, then by simply entering the license plate number "123456", or Zhang San's ID number, or his biometric information, the following person-vehicle association information can be obtained: Zhang San, whose ID number is XXXX, drove the black Audi Q5 with license plate number 123456 and appeared at location A at 2 p.m. on September 14, 2020.
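To illustrate how such an association record might be stored and queried, a simplified sketch follows; the field names and the in-memory list are illustrative assumptions rather than a prescribed data model.

```python
associations = []  # each entry links one vehicle's information with one person's information

def associate(vehicle_info, person_info):
    associations.append({"vehicle": vehicle_info, "person": person_info})

def find_by_any_field(value):
    """Return every association in which any vehicle or person field matches the query."""
    return [
        record for record in associations
        if value in record["vehicle"].values() or value in record["person"].values()
    ]

# Querying by the license plate alone retrieves the full person-vehicle record.
associate(
    {"plate": "123456", "brand": "Audi Q5", "color": "black"},
    {"name": "Zhang San", "id_number": "XXXX", "appeared_at": "2020-09-14 14:00, location A"},
)
print(find_by_any_field("123456"))
```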
It should be understood that the above example is given only for a better understanding of the technical solution of this embodiment and is not intended as the sole limitation on this embodiment.
In practical applications, vehicle information and person information can be extracted as appropriate for the usage scenario, and corresponding systems, such as a smart security system, can be built as needed to provide more convenient services for the relevant staff.
It is not difficult to see from the above description that the method for associating person and vehicle information provided in this embodiment extracts features from the vehicle identification frames and person identification frames demarcated in the video to be processed, which effectively avoids interference from background factors and thereby ensures the accuracy of the extracted vehicle information and person information.
In addition, whether the corresponding vehicle and person are associated is judged according to the demarcated vehicle identification frame and person identification frame, and when the two are determined to be associated, the otherwise isolated vehicle information and person information are linked, thereby achieving person-vehicle information association and greatly reducing search time and the waste of manpower and material resources in subsequent scenarios of finding a person by vehicle or finding a vehicle by person.
The second embodiment of the present application relates to a method for associating person and vehicle information. The second embodiment mainly addresses a specific application scenario of determining the positional relationship between the vehicle identification frame and the person identification frame according to the comparison results described in the first embodiment. For ease of understanding and description, it is explained below with reference to FIG. 2 to FIG. 5.
Specifically, since the comparison results used to determine the positional relationship between the vehicle identification frame and the person identification frame are obtained by comparing the vertex coordinate value L1 of the vehicle identification frame with the vertex coordinate value L2 of the person identification frame, and comparing the vertex coordinate value R1 of the vehicle identification frame with the vertex coordinate value R2 of the person identification frame, the following comparison results are possible: ① L1 is greater than L2 and R1 is greater than R2; ② L1 is greater than L2 and R1 is equal to R2; ③ L1 is greater than L2 and R1 is less than R2; ④ L1 is equal to L2 and R1 is greater than R2; ⑤ L1 is equal to L2 and R1 is equal to R2; ⑥ L1 is equal to L2 and R1 is less than R2; ⑦ L1 is less than L2 and R1 is greater than R2; ⑧ L1 is less than L2 and R1 is equal to R2; ⑨ L1 is less than L2 and R1 is less than R2.
From the above nine comparison results, it can be seen that only in cases ④, ⑤, ⑦, and ⑧, that is, when L1 is not greater than L2 and R1 is not less than R2, is the positional relationship between the vehicle identification frame and the person identification frame an inclusion relationship, specifically one in which the vehicle identification frame contains the person identification frame.
For ease of understanding, this embodiment takes case ⑦ as an example; the positional relationship between the vehicle identification frame and the person identification frame is shown in FIG. 2.
Correspondingly, in cases ②, ③, ⑤, and ⑥, that is, when L1 is not less than L2 and R1 is not greater than R2, the positional relationship between the vehicle identification frame and the person identification frame is an inclusion relationship, specifically one in which the person identification frame contains the vehicle identification frame.
For ease of understanding, this embodiment takes case ③ as an example; the positional relationship between the vehicle identification frame and the person identification frame is shown in FIG. 3.
It should be understood that the above case ⑤, in which the vehicle identification frame and the person identification frame completely overlap, is also essentially an inclusion relationship.
Correspondingly, in cases ① and ⑨, it can be determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusive relationship.
The operation of determining the positional relationship between the vehicle identification frame and the person identification frame according to the nine cases given above can, in practical applications, be implemented by pseudocode of the following form:
(Pseudocode figure PCTCN2021118538-appb-000001 of the original application; not reproduced here.)
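Since the original pseudocode is available only as a figure, the following Python sketch illustrates the same classification into the nine cases; it assumes image coordinates (x increasing rightward, y increasing downward) and applies each comparison to both the x and y components of the vertex coordinate values, which is one reasonable reading of the conditions above.

```python
def positional_relationship(L1, R1, L2, R2):
    """Classify the vehicle frame (L1, R1) against the person frame (L2, R2).

    Each point is an (x, y) tuple; L* is the upper-left corner and R* the
    lower-right corner of a frame. "Not greater" / "not less" are applied
    to both coordinate components, mirroring cases 1 to 9 described above.
    """
    def not_greater(a, b):
        return a[0] <= b[0] and a[1] <= b[1]

    if not_greater(L1, L2) and not_greater(R2, R1):
        return "inclusion: vehicle frame contains person frame"  # cases 4, 5, 7, 8
    if not_greater(L2, L1) and not_greater(R1, R2):
        return "inclusion: person frame contains vehicle frame"  # cases 2, 3, 5, 6
    return "non-inclusive"                                       # cases 1, 9
```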
It should be understood that the above example is given only for a better understanding of the technical solution of this embodiment and is not intended as the sole limitation on this embodiment.
In addition, it is worth mentioning that, under normal circumstances, the vehicle identification frame should be larger than the person identification frame, that is, the person is in the vehicle and may be the driver or a passenger, which is the case shown in FIG. 2. However, during actual processing of the video to be processed, a person may be outside the vehicle but too close to it, for example pressed up against the vehicle. In this case, the demarcated person identification frame often also encloses the vehicle, so that the person identification frame is larger than the vehicle identification frame, which is the case shown in FIG. 3.
Further, after it is determined that the positional relationship between the vehicle identification frame and the person identification frame is a non-inclusive relationship, that is, when the comparison results are cases ① and ⑨ given above, the following two judgment logics can be used to further judge whether a vehicle and a person appearing in the same image frame are associated.
Method 1: judge whether the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1.
Correspondingly, if L1 is not greater than L2, R1 is not greater than R2, and L2 is less than R1, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, which is the case shown in FIG. 4; otherwise, the vehicle identification frame and the person identification frame are completely separate with no overlapping area, and it is determined that their positional relationship is a non-inclusive and non-overlapping relationship.
Method 2: judge whether the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2.
Correspondingly, if L2 is not greater than L1, R2 is not greater than R1, and L1 is less than R2, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, which is the case shown in FIG. 5; otherwise, the vehicle identification frame and the person identification frame are completely separate with no overlapping area, and it is determined that their positional relationship is a non-inclusive and non-overlapping relationship.
In addition, it is worth mentioning that, in order to ensure the accuracy of the judgment result as much as possible, before determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, it is also possible to judge whether the overlapping area satisfies a preset overlap condition.
Correspondingly, if the condition is satisfied, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship; otherwise, it is determined that their positional relationship is a non-inclusive and non-overlapping relationship.
Regarding the above-mentioned operation of judging whether the overlapping area satisfies the preset overlap condition, the logic for Method 1 and Method 2 is roughly the same; only the vertex coordinate values used to determine the overlapping area and the reference area differ.
Specifically, for Method 1, the implementation is as follows:
First, the overlapping area is determined according to the vertex coordinate values L2 and R1, as shown by the shaded part in FIG. 4; the reference area is determined according to the vertex coordinate values L1 and R2, as shown by the dashed-box part in FIG. 4.
Then, it is judged whether the ratio of the overlapping area to the reference area is greater than a first preset threshold.
Finally, if it is greater, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship; otherwise, it is determined that their positional relationship is a non-inclusive and non-overlapping relationship.
For Method 2, the implementation is as follows:
First, the overlapping area is determined according to the vertex coordinate values R2 and L1, as shown by the shaded part in FIG. 5; the reference area is determined according to the vertex coordinate values L2 and R1, as shown by the dashed-box part in FIG. 5.
Then, it is judged whether the ratio of the overlapping area to the reference area is greater than the first preset threshold.
Finally, if it is greater, it is determined that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship; otherwise, it is determined that their positional relationship is a non-inclusive and non-overlapping relationship.
The above-mentioned first preset threshold can be set by those skilled in the art according to actual needs, for example, set to 20%.
For the operation of further judging, according to the overlapping area, whether the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, in practical applications this can be implemented by pseudocode of the following form:
(Pseudocode figure PCTCN2021118538-appb-000002 of the original application; not reproduced here.)
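Again, with the original pseudocode available only as a figure, the following sketch illustrates the overlap test of Method 1 and Method 2 together with the area condition; the componentwise reading of the comparisons and the 20% threshold value are assumptions taken from the example above.

```python
FIRST_PRESET_THRESHOLD = 0.20  # illustrative value; 20% is only the example given above

def rect_area(upper_left, lower_right):
    """Area of an axis-aligned rectangle given by its upper-left and lower-right corners."""
    width = lower_right[0] - upper_left[0]
    height = lower_right[1] - upper_left[1]
    return max(width, 0) * max(height, 0)

def is_overlap_relationship(L1, R1, L2, R2):
    """Overlap test of Method 1 / Method 2 with the overlap-area condition.

    Method 1: the vehicle frame sits to the upper-left of the person frame,
    the overlap rectangle spans L2..R1 and the reference rectangle spans L1..R2.
    Method 2 is the mirror case. Points are (x, y) tuples as before.
    """
    def not_greater(a, b):
        return a[0] <= b[0] and a[1] <= b[1]

    # Method 1
    if not_greater(L1, L2) and not_greater(R1, R2) and L2[0] < R1[0] and L2[1] < R1[1]:
        overlap, reference = rect_area(L2, R1), rect_area(L1, R2)
        return reference > 0 and overlap / reference > FIRST_PRESET_THRESHOLD
    # Method 2
    if not_greater(L2, L1) and not_greater(R2, R1) and L1[0] < R2[0] and L1[1] < R2[1]:
        overlap, reference = rect_area(L1, R2), rect_area(L2, R1)
        return reference > 0 and overlap / reference > FIRST_PRESET_THRESHOLD
    return False  # non-inclusive and non-overlapping
```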
It should be understood that the above example is given only for a better understanding of the technical solution of this embodiment and is not intended as the sole limitation on this embodiment.
In addition, in practical applications, in order to ensure the accuracy of the judgment result as much as possible, each of the above judgments may be assisted by the video frame images of adjacent frames so as to exclude interference and thereby ensure the accuracy of the finally established person-vehicle association result.
In addition, in practical applications, when the above calculations are performed according to the vertex coordinate values, the calculations are specifically based on the coordinate values of each vertex on the X axis and the Y axis.
Thus, by determining the positional relationship between the vehicle identification frame and the person identification frame in the above manner, the method for associating person and vehicle information provided in this embodiment can be adapted both to scenes where the person and the vehicle share one frame (the person is in the vehicle) and to scenes where the person and the vehicle are in different frames but in the same picture, so that the association between isolated, scattered pieces of target information (vehicle information and person information) can be better analyzed. For example, even if a user is not the owner of a certain vehicle, if he frequently appears at the same place at the same time as the vehicle, then when the vehicle subsequently needs to be tracked, it may be tracked simply by obtaining the user's travel trajectory, thereby providing the relevant staff with more and deeper valuable clue information and improving the practical value of the vehicle information and person information.
The third embodiment of the present application relates to a method for associating person and vehicle information. The third embodiment mainly addresses the scenario, described in the second embodiment, in which the vehicle identification frame and the person identification frame are in a non-inclusive and non-overlapping relationship, that is, there is a certain distance between them, before it is determined that the vehicle and the person are not associated. In other words, the third embodiment further screens out associated vehicles and persons from those not yet found to be associated. For ease of understanding and description, it is explained below with reference to FIG. 6.
As shown in FIG. 6, before it is determined that the vehicle and the person are not associated, the center point coordinate value C1 of the vehicle identification frame may first be determined according to the vertex coordinate values L1 and R1, and the center point coordinate value C2 of the person identification frame may be determined according to the vertex coordinate values L2 and R2. Next, the distance D1 from the center point of the vehicle identification frame to the center point of the person identification frame is determined according to C1 and C2. Next, the diagonal distance D2 of the video frame image is determined. Then, it is judged whether the ratio of the distance D1 to the diagonal distance D2 is greater than a second preset threshold. Finally, if it is greater, it is determined that the vehicle and the person are associated; otherwise, it is determined that the vehicle and the person are not associated.
In practical applications, the above-mentioned diagonal distance D2 may be determined according to the vertex coordinate values of the upper-left and lower-right corners of the video frame image, or according to the vertex coordinate values of the upper-right and lower-left corners of the video frame image, which is not limited in this embodiment.
In addition, in practical applications, the distance D1 may be determined not only according to the center point coordinate value C1 of the vehicle identification frame and the center point coordinate value C2 of the person identification frame, but also according to the coordinate values of a coordinate point on or inside the vehicle identification frame and a coordinate point on or inside the person identification frame, which is not limited in this embodiment.
In addition, the above-mentioned second preset threshold can be set by those skilled in the art according to actual needs, for example, set to 30%.
For the operation of further judging, according to the distance D1 and the diagonal distance D2, whether the vehicle in the vehicle identification frame and the person in the person identification frame are associated, in practical applications this can be implemented by pseudocode of the following form:
(Pseudocode figure PCTCN2021118538-appb-000003 of the original application; not reproduced here.)
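With the original pseudocode available only as a figure, the following sketch illustrates the distance-based judgment; it follows the condition exactly as stated in the text (association is declared when D1/D2 exceeds the second preset threshold), and the 30% threshold is only the example value given above.

```python
import math

SECOND_PRESET_THRESHOLD = 0.30  # illustrative value; 30% is only the example given above

def centers_and_distance(L1, R1, L2, R2, image_width, image_height):
    """Compute C1, C2, the center distance D1, and the image diagonal D2."""
    C1 = ((L1[0] + R1[0]) / 2, (L1[1] + R1[1]) / 2)  # center of the vehicle frame
    C2 = ((L2[0] + R2[0]) / 2, (L2[1] + R2[1]) / 2)  # center of the person frame
    D1 = math.dist(C1, C2)
    D2 = math.hypot(image_width, image_height)
    return C1, C2, D1, D2

def distance_based_association(L1, R1, L2, R2, image_width, image_height):
    """Distance-based judgment of the third embodiment, as stated in the text."""
    _, _, D1, D2 = centers_and_distance(L1, R1, L2, R2, image_width, image_height)
    return D1 / D2 > SECOND_PRESET_THRESHOLD
```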
It should be understood that the above example is given only for a better understanding of the technical solution of this embodiment and is not intended as the sole limitation on this embodiment.
Similarly, in practical applications, in order to ensure the accuracy of the judgment result as much as possible, each of the above judgments may be assisted by the video frame images of adjacent frames so as to exclude interference and thereby ensure the accuracy of the finally established person-vehicle association result.
Thus, when the vehicle identification frame and the person identification frame are determined to be in a non-inclusive and non-overlapping relationship, that is, there is a certain distance between them, before it is determined that the vehicle and the person are not associated, this distance is used to judge whether the vehicle in the vehicle identification frame and the person in the person identification frame are associated. The method for associating person and vehicle information provided in this embodiment can therefore better adapt to scenes where the person and the vehicle are in different frames but in the same picture, and the association between isolated, scattered pieces of target information (vehicle information and person information) can be better analyzed.
The fourth embodiment of the present application relates to a method for associating person and vehicle information. The fourth embodiment mainly addresses the scenario, described in the third embodiment, in which it is determined according to the distance D1 and the diagonal distance D2 that the vehicle and the person are not associated, that is, the person and the vehicle are in the same picture but the distance judgment is not satisfied. In other words, the fourth embodiment further screens out associated vehicles and persons from those not yet found to be associated. For ease of understanding and description, it is explained below with reference to FIG. 7.
Specifically, before it is determined according to the distance D1 and the diagonal distance D2 that the vehicle and the person are not associated, the following operations may first be performed:
(1) Expand the vehicle identification frame and the person identification frame to obtain a background identification frame.
Specifically, the above-mentioned operation of expanding the vehicle identification frame and the person identification frame expands them outward to a preset size.
In addition, in practical applications, the above-mentioned background identification frame may be a single frame that includes both the vehicle identification frame and the person identification frame; alternatively, the vehicle identification frame and the person identification frame may be expanded separately to obtain a background identification frame corresponding to the vehicle identification frame and a background identification frame corresponding to the person identification frame.
To facilitate processing, this embodiment adopts the first approach, that is, the background identification frame includes both the vehicle identification frame and the person identification frame.
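A possible sketch of constructing such a background identification frame is given below; since the preset size of the outward expansion is not specified, the pixel margin used here is an illustrative assumption.

```python
def background_identification_frame(vehicle_frame, person_frame,
                                    image_width, image_height, margin=50):
    """Build one background frame enclosing both boxes, expanded outward.

    Boxes are (x1, y1, x2, y2). The margin (in pixels) stands in for the
    unspecified preset size and is clamped to the image boundary.
    """
    x1 = max(min(vehicle_frame[0], person_frame[0]) - margin, 0)
    y1 = max(min(vehicle_frame[1], person_frame[1]) - margin, 0)
    x2 = min(max(vehicle_frame[2], person_frame[2]) + margin, image_width)
    y2 = min(max(vehicle_frame[3], person_frame[3]) + margin, image_height)
    return (x1, y1, x2, y2)
```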
(2) Perform feature extraction on the background identification frame, and determine the location information of the vehicle and the person according to the extracted features.
Regarding the feature extraction operation performed on the background identification frame, those skilled in the art may pre-select a suitable machine learning algorithm as needed to construct a corresponding feature extraction model, and then perform feature extraction on the background identification frame according to the trained feature extraction model, so as to extract feature information that meets the requirements.
(3) Determine the time information at which the vehicle and the person appear in the video to be processed.
(4) Acquire associated videos that are associated with the video to be processed.
Specifically, an associated video is a video captured by a camera that is located in the same area as the camera capturing the video to be processed, but at a different point.
(5) According to the location information and the time information, extract associated video frame images from the associated videos provided by the various points.
The operation in step (5) of extracting associated video frame images from the associated videos provided by the various points, and then identifying the above-mentioned vehicle and person in the associated video frame images, is similar to the extraction manner given in step 102 of the first embodiment and is not repeated here.
(6) Traverse the associated video frame images, and record the points at which the vehicle appears and the points at which the person appears.
Correspondingly, if the intersection of the points at which the vehicle appears and the points at which the person appears is greater than a third preset threshold, it is determined that the vehicle and the person are associated; otherwise, it is determined that the vehicle and the person are not associated.
The above-mentioned third preset threshold can be set by those skilled in the art according to actual needs, for example, set to more than half of the number of points.
For ease of understanding, the following description is given with reference to FIG. 7:
Suppose the association between vehicle A and person B needs to be judged, and after the judgments given in the first, second, and third embodiments above, the association between vehicle A and person B still cannot be determined. When the manner given in this embodiment is used for further determination, use is made of associated video 1 captured by the camera at point 1, associated video 2 captured by the camera at point 2, associated video 3 captured by the camera at point 3, and associated video 4 captured by the camera at point 4.
According to the location information determined in (2) above and the time information determined in step (3), the corresponding video frame images are extracted from associated video 1, associated video 2, associated video 3, and associated video 4 respectively, and the points at which the vehicle appears and the points at which the person appears are then recorded.
As shown in FIG. 7, vehicle A and person B appear in the video frame image captured at point 1, vehicle A and person B appear in the video frame image captured at point 2, only person B appears in the video frame image captured at point 3, and vehicle A and person B appear in the video frame image captured at point 4.
Based on the above rule that the vehicle and the person are associated when the intersection of the points at which vehicle A appears and the points at which person B appears is greater than the third preset threshold, for example more than half of the number of points, it can be seen from the above records that, with the 4 auxiliary points in FIG. 7, vehicle A and person B can be considered associated as long as the intersection of the points at which vehicle A appears and the points at which person B appears is greater than 2, that is, 3 or 4. In the example given in FIG. 7, vehicle A and person B are therefore associated.
In addition, it is worth mentioning that, since the probability of a misjudgment is relatively high when such co-occurrence happens only once or a few times, in order to ensure the accuracy of the result determined in this manner, such co-occurrence is required to happen multiple times and frequently.
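A sketch of this cross-camera co-occurrence judgment is given below; the `detections_by_point` mapping is an assumed data structure used only for illustration, and the threshold follows the "more than half of the points" example above.

```python
def cross_camera_association(detections_by_point, vehicle_id, person_id):
    """Judge association from co-occurrence across camera points.

    detections_by_point maps a point (camera) identifier to the set of
    object identifiers seen at that point within the relevant time window.
    """
    vehicle_points = {p for p, seen in detections_by_point.items() if vehicle_id in seen}
    person_points = {p for p, seen in detections_by_point.items() if person_id in seen}
    co_occurrence = vehicle_points & person_points
    third_preset_threshold = len(detections_by_point) / 2  # "more than half of the points"
    return len(co_occurrence) > third_preset_threshold

# The FIG. 7 example: vehicle A and person B co-occur at points 1, 2, and 4 (3 > 2).
example = {
    1: {"vehicle_A", "person_B"},
    2: {"vehicle_A", "person_B"},
    3: {"person_B"},
    4: {"vehicle_A", "person_B"},
}
print(cross_camera_association(example, "vehicle_A", "person_B"))  # True
```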
Thus, when the person and the vehicle are in the same picture but the distance judgment is not satisfied, videos captured by cameras at different points are used to assist in judging whether the current vehicle and person are associated. The method for associating person and vehicle information provided in this embodiment can therefore better adapt to the scene where the person and the vehicle are in the same picture but the distance judgment is not satisfied, that is, associated vehicles and persons can be screened out of such a scene as far as possible, and the association between isolated, scattered pieces of target information (vehicle information and person information) can be better analyzed.
In addition, it should be understood that the division of the steps of the above methods is only for clarity of description. During implementation, steps may be combined into one step, or a step may be split into multiple steps, as long as the same logical relationship is preserved; all such variations fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or process without changing the core design of the algorithm or process also falls within the protection scope of this patent.
The fifth embodiment of the present application relates to an apparatus for associating person and vehicle information, which, as shown in FIG. 8, includes an acquisition module 801, an identification module 802, an extraction module 803, a judgment module 804, and an association module 805.
The acquisition module 801 is configured to acquire the video to be processed; the identification module 802 is configured to identify the vehicles and persons included in the video to be processed and obtain vehicle identification frames and person identification frames; the extraction module 803 is configured to perform feature extraction on the vehicle identification frames and person identification frames respectively to obtain vehicle information and person information; the judgment module 804 is configured to judge whether the vehicle and the person are associated according to the vehicle identification frame and the person identification frame; and the association module 805 is configured to associate the vehicle information with the person information when the vehicle and the person are associated.
In addition, in another example, when the identification module 802 identifies the vehicles and persons included in the video to be processed and obtains the vehicle identification frames and person identification frames, it specifically:
extracts video frame images from the video to be processed;
performs vehicle detection on the video frame image, and when vehicles are detected, demarcates all vehicles appearing in the video frame image to obtain N vehicle identification frames, N being an integer greater than 0; and
performs face and human-figure detection on the video frame image, and when persons are detected, demarcates all persons appearing in the video frame image to obtain M person identification frames, M being an integer greater than 0.
In addition, in another example, the apparatus for associating person and vehicle information may further include a combination module.
Specifically, the combination module is configured to combine the N vehicle identification frames and the M person identification frames to obtain N×M person-vehicle combinations.
Correspondingly, the operations performed by the judgment module 804 are applied to each person-vehicle combination. That is, the judgment module 804 traverses the N×M person-vehicle combinations and performs the above judgment operation for each traversed person-vehicle combination.
In addition, in another example, when the judgment module 804 judges, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated, it specifically:
acquires the coordinate information of the vehicle identification frame and the coordinate information of the person identification frame, obtaining vehicle coordinate information and person coordinate information; and
judges, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated.
In addition, in another example, the vehicle coordinate information includes at least the vertex coordinate value L1 of the upper-left corner and the vertex coordinate value R1 of the lower-right corner of the vehicle identification frame, and the person coordinate information includes at least the vertex coordinate value L2 of the upper-left corner and the vertex coordinate value R2 of the lower-right corner of the person identification frame.
Correspondingly, when the judgment module 804 judges, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated, it specifically:
compares the vertex coordinate value L1 with the vertex coordinate value L2, and compares the vertex coordinate value R1 with the vertex coordinate value R2;
determines the positional relationship between the vehicle identification frame and the person identification frame according to the comparison results; and
judges, according to the positional relationship, whether the vehicle and the person are associated.
In addition, in another example, when the judgment module 804 determines the positional relationship between the vehicle identification frame and the person identification frame according to the comparison results, it specifically operates as follows (an illustrative sketch follows these steps):
if the vertex coordinate value L1 is not greater than the vertex coordinate value L2 and the vertex coordinate value R1 is not less than the vertex coordinate value R2, or if the vertex coordinate value L1 is not less than the vertex coordinate value L2 and the vertex coordinate value R1 is not greater than the vertex coordinate value R2, it determines that the positional relationship between the vehicle identification frame and the person identification frame is a containment relationship;
otherwise, it determines that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment relationship.
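A sketch of the containment test, assuming image coordinates (origin at the top-left, x to the right, y downward), boxes given as (x1, y1, x2, y2) tuples, and the vertex comparisons applied component-wise; the function name is an assumption for this example.

def position_is_containment(vehicle_box, person_box):
    # L1 = (vx1, vy1), R1 = (vx2, vy2); L2 = (px1, py1), R2 = (px2, py2)
    vx1, vy1, vx2, vy2 = vehicle_box
    px1, py1, px2, py2 = person_box

    # Vehicle frame encloses the person frame: L1 not greater than L2 and R1 not less than R2.
    vehicle_contains_person = vx1 <= px1 and vy1 <= py1 and vx2 >= px2 and vy2 >= py2
    # Person frame encloses the vehicle frame: L1 not less than L2 and R1 not greater than R2.
    person_contains_vehicle = px1 <= vx1 and py1 <= vy1 and px2 >= vx2 and py2 >= vy2
    return vehicle_contains_person or person_contains_vehicle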
In addition, in another example, after determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment relationship, the judgment module 804 is further configured to perform the following operations:
judging whether the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1;
if the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1, determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship;
otherwise, determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
In addition, in another example, before determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, the judgment module 804 is further configured to perform the following operations (an illustrative sketch of this area check follows these steps):
determining an overlap area according to the vertex coordinate value L2 and the vertex coordinate value R1;
determining a reference area according to the vertex coordinate value L1 and the vertex coordinate value R2;
judging whether the ratio of the overlap area to the reference area is greater than a first preset threshold;
correspondingly, if it is greater, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship; otherwise, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
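A sketch of the area check for this case, assuming the same (x1, y1, x2, y2) box convention. The value of the first preset threshold is not given in the disclosure, so the default below is only a placeholder. The mirrored case described further below swaps the roles of the two frames (overlap area from R2 and L1, reference area from L2 and R1).

def overlap_exceeds_threshold(vehicle_box, person_box, first_threshold=0.1):
    # Case L1 <= L2, R1 <= R2 and L2 < R1 (vehicle frame to the upper-left of the person frame).
    vx1, vy1, vx2, vy2 = vehicle_box
    px1, py1, px2, py2 = person_box

    # Overlap area is spanned by L2 (person top-left) and R1 (vehicle bottom-right).
    overlap_w = max(0.0, vx2 - px1)
    overlap_h = max(0.0, vy2 - py1)
    overlap_area = overlap_w * overlap_h

    # Reference area is spanned by L1 (vehicle top-left) and R2 (person bottom-right).
    reference_w = px2 - vx1
    reference_h = py2 - vy1
    reference_area = reference_w * reference_h
    if reference_area <= 0:
        return False
    return overlap_area / reference_area > first_threshold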
In addition, in another example, after determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment relationship, the judgment module 804 is further configured to perform the following operations:
judging whether the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2;
if the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2, determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship;
otherwise, determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
In addition, in another example, before determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, the judgment module 804 is further configured to perform the following operations:
determining an overlap area according to the vertex coordinate value R2 and the vertex coordinate value L1;
determining a reference area according to the vertex coordinate value L2 and the vertex coordinate value R1;
judging whether the ratio of the overlap area to the reference area is greater than the first preset threshold;
correspondingly, if it is greater, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship; otherwise, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
In addition, in another example, when the judgment module 804 judges, according to the positional relationship, whether the vehicle and the person are associated, it specifically operates as follows (an illustrative sketch follows these cases):
when the positional relationship is a containment relationship, determining that the vehicle and the person are associated;
when the positional relationship is an overlap relationship, determining that the vehicle and the person are associated;
when the positional relationship is a non-containment and non-overlap relationship, determining that the vehicle and the person are not associated.
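A minimal sketch of this decision rule; the string labels for the positional relationships are assumptions made for this example.

def is_associated(position_relationship):
    # 'containment' and 'overlap' lead to an association; 'disjoint'
    # (non-containment and non-overlap) does not, subject to the further
    # checks described below before a final negative conclusion is drawn.
    return position_relationship in ("containment", "overlap")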
In addition, in another example, before determining that the vehicle and the person are not associated, the judgment module 804 is further configured to perform the following operations (an illustrative sketch of this distance check follows these steps):
determining a center point coordinate value C1 of the vehicle identification frame according to the vertex coordinate value L1 and the vertex coordinate value R1;
determining a center point coordinate value C2 of the person identification frame according to the vertex coordinate value L2 and the vertex coordinate value R2;
determining a distance D1 from the center point of the vehicle identification frame to the center point of the person identification frame according to the center point coordinate value C1 and the center point coordinate value C2;
determining a diagonal distance D2 of the video frame image;
judging whether the ratio of the distance D1 to the diagonal distance D2 is greater than a second preset threshold;
correspondingly, if it is greater, determining that the vehicle and the person are associated; otherwise, performing the step of determining that the vehicle and the person are not associated.
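A sketch of the center-distance check, assuming the frame size is known in pixels. The second preset threshold is not given in the disclosure, so the default is only a placeholder, and the direction of the comparison follows the text of this embodiment as written.

import math

def passes_distance_check(vehicle_box, person_box, frame_w, frame_h, second_threshold=0.5):
    vx1, vy1, vx2, vy2 = vehicle_box
    px1, py1, px2, py2 = person_box

    c1 = ((vx1 + vx2) / 2.0, (vy1 + vy2) / 2.0)   # C1: center of the vehicle identification frame
    c2 = ((px1 + px2) / 2.0, (py1 + py2) / 2.0)   # C2: center of the person identification frame

    d1 = math.hypot(c1[0] - c2[0], c1[1] - c2[1])  # D1: distance between the two centers
    d2 = math.hypot(frame_w, frame_h)              # D2: diagonal of the video frame image
    # Per the embodiment, a ratio greater than the second preset threshold leads to
    # the conclusion that the vehicle and the person are associated.
    return d1 / d2 > second_threshold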
In addition, in another example, before determining that the vehicle and the person are not associated, the judgment module 804 is further configured to perform the following operations (an illustrative sketch of this cross-camera check follows these steps):
enlarging the vehicle identification frame and the person identification frame to obtain a background identification frame;
performing feature extraction on the background identification frame, and determining location information of the vehicle and the person according to the extracted features;
determining time information of the appearance of the vehicle and the person in the video to be processed;
acquiring an associated video associated with the video to be processed, the associated video being a video captured by a camera that is located in the same area as, but at a different point from, the camera capturing the video to be processed;
extracting associated video frame images from the associated videos provided by the respective points according to the location information and the time information;
traversing the associated video frame images, and recording the points at which the vehicle appears and the points at which the person appears;
if the intersection of the points at which the vehicle appears and the points at which the person appears is greater than a third threshold, determining that the vehicle and the person are associated;
otherwise, performing the step of determining that the vehicle and the person are not associated.
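A sketch of the cross-camera check, assuming the points at which the vehicle and the person were recorded have already been collected as camera-point identifiers. The third threshold value is not given in the disclosure, so the default below is only a placeholder.

def passes_cross_camera_check(vehicle_points, person_points, third_threshold=2):
    # vehicle_points / person_points: identifiers of the camera points (in the same
    # area as the original camera) at which the vehicle or the person was recorded
    # in the associated videos.
    common_points = set(vehicle_points) & set(person_points)
    # If the intersection is larger than the third threshold, the vehicle and the
    # person are taken to be associated.
    return len(common_points) > third_threshold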
It is readily apparent that this embodiment is an apparatus embodiment corresponding to the first, second, third, or fourth embodiment, and this embodiment can be implemented in cooperation with any of them. The relevant technical details mentioned in the first, second, third, or fourth embodiment remain valid in this embodiment and are not repeated here to avoid redundancy. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied in the first, second, third, or fourth embodiment.
It is worth mentioning that the modules involved in this embodiment are all logical modules. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present application, units that are not closely related to solving the technical problem raised by the present application are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
A sixth embodiment of the present application relates to a person-vehicle information association device. As shown in FIG. 9, the device includes at least one processor 901 and a memory 902 communicatively connected to the at least one processor 901. The memory 902 stores instructions executable by the at least one processor 901, and the instructions are executed by the at least one processor 901 so that the at least one processor 901 can perform the person-vehicle information association method described in the above method embodiments.
The memory 902 and the processor 901 are connected by a bus. The bus may include any number of interconnected buses and bridges, which connect the various circuits of the one or more processors 901 and the memory 902 together. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor 901 is transmitted over a wireless medium through an antenna, and the antenna further receives data and transfers the data to the processor 901.
The processor 901 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 902 may be used to store data used by the processor 901 when performing operations.
A seventh embodiment of the present application relates to a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the person-vehicle information association method described in the above method embodiments is implemented.
That is, those skilled in the art can understand that all or part of the steps in the methods of the above embodiments can be completed by instructing relevant hardware through a program. The program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for realizing the present application, and that in practical applications various changes in form and detail can be made without departing from the spirit and scope of the present application.

Claims (16)

  1. A person-vehicle information association method, comprising:
    acquiring a video to be processed;
    identifying vehicles and persons included in the video to be processed to obtain a vehicle identification frame and a person identification frame;
    performing feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information;
    judging, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and
    if they are associated, associating the vehicle information with the person information.
  2. The person-vehicle information association method according to claim 1, wherein identifying the vehicles and persons included in the video to be processed to obtain the vehicle identification frame and the person identification frame comprises:
    extracting a video frame image from the video to be processed;
    performing vehicle detection on the video frame image, and when a vehicle is detected, marking in the video frame image all vehicles appearing in the video frame image to obtain N vehicle identification frames, N being an integer greater than 0; and
    performing face and human-shape detection on the video frame image, and when a person is detected, marking in the video frame image all persons appearing in the video frame image to obtain M person identification frames, M being an integer greater than 0.
  3. The person-vehicle information association method according to claim 2, wherein before judging, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated, the method further comprises:
    combining the N vehicle identification frames and the M person identification frames to obtain N×M person-vehicle combinations; and
    traversing the N×M person-vehicle combinations, and for each traversed person-vehicle combination, performing the step of judging, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated.
  4. The person-vehicle information association method according to any one of claims 1 to 3, wherein judging, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated comprises:
    acquiring coordinate information of the vehicle identification frame and coordinate information of the person identification frame to obtain vehicle coordinate information and person coordinate information; and
    judging, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated.
  5. The person-vehicle information association method according to claim 4, wherein the vehicle coordinate information comprises at least a vertex coordinate value L1 of the upper-left corner and a vertex coordinate value R1 of the lower-right corner of the vehicle identification frame, and the person coordinate information comprises at least a vertex coordinate value L2 of the upper-left corner and a vertex coordinate value R2 of the lower-right corner of the person identification frame; and
    judging, according to the vehicle coordinate information and the person coordinate information, whether the vehicle and the person are associated comprises:
    comparing the vertex coordinate value L1 with the vertex coordinate value L2, and comparing the vertex coordinate value R1 with the vertex coordinate value R2;
    determining a positional relationship between the vehicle identification frame and the person identification frame according to the comparison results; and
    judging, according to the positional relationship, whether the vehicle and the person are associated.
  6. The person-vehicle information association method according to claim 5, wherein determining the positional relationship between the vehicle identification frame and the person identification frame according to the comparison results comprises:
    if the vertex coordinate value L1 is not greater than the vertex coordinate value L2 and the vertex coordinate value R1 is not less than the vertex coordinate value R2, or if the vertex coordinate value L1 is not less than the vertex coordinate value L2 and the vertex coordinate value R1 is not greater than the vertex coordinate value R2, determining that the positional relationship between the vehicle identification frame and the person identification frame is a containment relationship;
    otherwise, determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment relationship.
  7. The person-vehicle information association method according to claim 6, wherein after determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment relationship, the method further comprises:
    judging whether the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1;
    if the vertex coordinate value L1 is not greater than the vertex coordinate value L2, the vertex coordinate value R1 is not greater than the vertex coordinate value R2, and the vertex coordinate value L2 is less than the vertex coordinate value R1, determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship;
    otherwise, determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
  8. The person-vehicle information association method according to claim 7, wherein before determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, the method further comprises:
    determining an overlap area according to the vertex coordinate value L2 and the vertex coordinate value R1;
    determining a reference area according to the vertex coordinate value L1 and the vertex coordinate value R2;
    judging whether a ratio of the overlap area to the reference area is greater than a first preset threshold;
    if it is greater, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship;
    otherwise, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
  9. The person-vehicle information association method according to claim 6, wherein after determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment relationship, the method further comprises:
    judging whether the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2;
    if the vertex coordinate value L2 is not greater than the vertex coordinate value L1, the vertex coordinate value R2 is not greater than the vertex coordinate value R1, and the vertex coordinate value L1 is less than the vertex coordinate value R2, determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship;
    otherwise, determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
  10. The person-vehicle information association method according to claim 9, wherein before determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship, the method further comprises:
    determining an overlap area according to the vertex coordinate value R2 and the vertex coordinate value L1;
    determining a reference area according to the vertex coordinate value L2 and the vertex coordinate value R1;
    judging whether a ratio of the overlap area to the reference area is greater than a first preset threshold;
    if it is greater, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is an overlap relationship;
    otherwise, performing the step of determining that the positional relationship between the vehicle identification frame and the person identification frame is a non-containment and non-overlap relationship.
  11. The person-vehicle information association method according to any one of claims 5 to 10, wherein judging, according to the positional relationship, whether the vehicle and the person are associated comprises:
    when the positional relationship is a containment relationship, determining that the vehicle and the person are associated;
    when the positional relationship is an overlap relationship, determining that the vehicle and the person are associated;
    when the positional relationship is a non-containment and non-overlap relationship, determining that the vehicle and the person are not associated.
  12. The person-vehicle information association method according to claim 11, wherein before determining that the vehicle and the person are not associated, the method further comprises:
    determining a center point coordinate value C1 of the vehicle identification frame according to the vertex coordinate value L1 and the vertex coordinate value R1;
    determining a center point coordinate value C2 of the person identification frame according to the vertex coordinate value L2 and the vertex coordinate value R2;
    determining a distance D1 from the center point of the vehicle identification frame to the center point of the person identification frame according to the center point coordinate value C1 and the center point coordinate value C2;
    determining a diagonal distance D2 of the video frame image;
    judging whether a ratio of the distance D1 to the diagonal distance D2 is greater than a second preset threshold;
    if it is greater, determining that the vehicle and the person are associated;
    otherwise, performing the step of determining that the vehicle and the person are not associated.
  13. The person-vehicle information association method according to claim 12, wherein before determining that the vehicle and the person are not associated, the method further comprises:
    enlarging the vehicle identification frame and the person identification frame to obtain a background identification frame;
    performing feature extraction on the background identification frame, and determining location information of the vehicle and the person according to the extracted features;
    determining time information of the appearance of the vehicle and the person in the video to be processed;
    acquiring an associated video associated with the video to be processed, the associated video being a video captured by a camera that is located in the same area as, but at a different point from, the camera capturing the video to be processed;
    extracting associated video frame images from the associated videos provided by the respective points according to the location information and the time information;
    traversing the associated video frame images, and recording the points at which the vehicle appears and the points at which the person appears;
    if an intersection of the points at which the vehicle appears and the points at which the person appears is greater than a third threshold, determining that the vehicle and the person are associated;
    otherwise, performing the step of determining that the vehicle and the person are not associated.
  14. A person-vehicle information association apparatus, comprising:
    an acquisition module, configured to acquire a video to be processed;
    an identification module, configured to identify vehicles and persons included in the video to be processed to obtain a vehicle identification frame and a person identification frame;
    an extraction module, configured to perform feature extraction on the vehicle identification frame and the person identification frame respectively to obtain vehicle information and person information;
    a judgment module, configured to judge, according to the vehicle identification frame and the person identification frame, whether the vehicle and the person are associated; and
    an association module, configured to associate the vehicle information with the person information when the vehicle and the person are associated.
  15. A person-vehicle information association device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the person-vehicle information association method according to any one of claims 1 to 13.
  16. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the person-vehicle information association method according to any one of claims 1 to 13 is implemented.
PCT/CN2021/118538 2020-09-23 2021-09-15 Human-vehicle information association method and apparatus, and device and storage medium WO2022063002A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011009511.1 2020-09-23
CN202011009511.1A CN114255409A (en) 2020-09-23 2020-09-23 Man-vehicle information association method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022063002A1 (en)

Family

ID=80788626

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/118538 WO2022063002A1 (en) 2020-09-23 2021-09-15 Human-vehicle information association method and apparatus, and device and storage medium

Country Status (2)

Country Link
CN (1) CN114255409A (en)
WO (1) WO2022063002A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921083A (en) * 2018-06-28 2018-11-30 浙江工业大学 Illegal flowing street pedlar recognition methods based on deep learning target detection
CN109214320A (en) * 2018-08-23 2019-01-15 中国电子科技集团公司电子科学研究院 People's vehicle correlating method and device based on video analysis
CN111063199A (en) * 2019-12-19 2020-04-24 深圳市捷顺科技实业股份有限公司 Method and device for associating vehicle with license plate and terminal equipment
CN111695429A (en) * 2020-05-15 2020-09-22 深圳云天励飞技术有限公司 Video image target association method and device and terminal equipment

Also Published As

Publication number Publication date
CN114255409A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
Fujiyoshi et al. Deep learning-based image recognition for autonomous driving
US20190042888A1 (en) Training method, training apparatus, region classifier, and non-transitory computer readable medium
CN102708691B (en) False license plate identification method based on matching between license plate and automobile type
KR20210101313A (en) Face recognition method, neural network training method, apparatus and electronic device
Kuang et al. Feature selection based on tensor decomposition and object proposal for night-time multiclass vehicle detection
CN102902957A (en) Video-stream-based automatic license plate recognition method
CN106529494A (en) Human face recognition method based on multi-camera model
CN112651293B (en) Video detection method for road illegal spreading event
CN103324958B (en) Based on the license plate locating method of sciagraphy and SVM under a kind of complex background
CN104657724A (en) Method for detecting pedestrians in traffic videos
CN110826415A (en) Method and device for re-identifying vehicles in scene image
CN114049572A (en) Detection method for identifying small target
Yao et al. Coupled multivehicle detection and classification with prior objectness measure
Awang et al. Vehicle type classification using an enhanced sparse-filtered convolutional neural network with layer-skipping strategy
CN114596592B (en) Pedestrian re-identification method, system, equipment and computer readable storage medium
Chen et al. Vehicle type classification based on convolutional neural network
CN104751197A (en) Device and method for recognizing faces of drivers during vehicle running on basis of video analysis
Tumas et al. Acceleration of HOG based pedestrian detection in FIR camera video stream
WO2022063002A1 (en) Human-vehicle information association method and apparatus, and device and storage medium
Priya et al. Intelligent parking system
CN106023270A (en) Video vehicle detection method based on locally symmetric features
Wang et al. The color identification of automobiles for video surveillance
Chiu et al. A Two-stage Learning Approach for Traffic Sign Detection and Recognition.
Jourdheuil et al. Heterogeneous adaboost with real-time constraints-application to the detection of pedestrians by stereovision
CN115599938A (en) Truck re-identification method based on deep neural network and feature fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21871373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.08.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21871373

Country of ref document: EP

Kind code of ref document: A1