CN111383248A - Method and device for judging red light running of pedestrian and electronic equipment - Google Patents


Info

Publication number
CN111383248A
Authority
CN
China
Prior art keywords: target person, image information, red light, line, lines
Prior art date
Legal status
Granted
Application number
CN201811648609.4A
Other languages
Chinese (zh)
Other versions
CN111383248B (en)
Inventor
郑文先 (Zheng Wenxian)
尹义 (Yin Yi)
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201811648609.4A
Publication of CN111383248A
Application granted
Publication of CN111383248B
Active legal status
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00: Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The application provides a method for determining that a pedestrian has run a red light, comprising the following steps: during the red-light period of a traffic light, acquiring first image information of a target person in a preset area at one end of a pedestrian crossing; tracking and detecting the target person during the red-light period and, if the positions of the target person and a determination line satisfy a preset positional relationship, judging that the target person has run the red light and acquiring second image information of the target person at that determination line. The first image information and each piece of second image information include the traffic light and corresponding head-portrait features of the target person; there are at least two determination lines, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between the preset areas at the two ends of the pedestrian crossing. The first image information and the at least two pieces of second image information are uploaded to a processing system. Because at least two determination lines are set, the accuracy of the red-light-running determination is improved, and because the first image information and the at least two pieces of second image information together form the evidence, the quality of the collected evidence is improved.

Description

Method and device for judging red light running of pedestrian and electronic equipment
Technical Field
The invention relates to the field of artificial intelligence, and in particular to a method and a device for judging whether a pedestrian has run a red light, and to electronic equipment.
Background
Face image recognition is one of the image-processing techniques commonly used in traffic management today, for example to recognize violations by participants in traffic and then manage those violations. At present, red-light-running identification mainly captures images of people on the pedestrian crossing during the red-light period to judge whether they have run the red light. However, because of interfering factors such as vehicles passing during the red-light period, the accuracy of this identification and judgment is not high, so the evidence collected is insufficient or even incorrect, and the evidence quality is poor.
Disclosure of Invention
The embodiments of the invention provide a method and a device for judging whether a pedestrian has run a red light, and electronic equipment, which can improve the accuracy of the red-light-running determination and thereby improve the quality of evidence collection.
In a first aspect, an embodiment of the present invention provides a method for determining that a pedestrian runs a red light, including:
acquiring, during a red-light period of a traffic light, first image information of a target person in a preset area at one end of a pedestrian crossing, wherein the first image information includes the traffic light and first head-portrait features of the target person;
tracking and detecting the target person during the red-light period and, if the positions of the target person and a determination line satisfy a preset positional relationship, judging that the target person has run the red light and acquiring second image information of the target person at that determination line, wherein the second image information includes the traffic light and second head-portrait features of the target person, there are at least two determination lines, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between the preset area at one end of the pedestrian crossing and the preset area at the other end;
uploading the first image information and the at least two pieces of second image information to a processing system.
Optionally, the method further includes:
acquiring third image information of the target person in a preset area at the other end of the pedestrian crossing during the red-light period of the traffic light, wherein the third image information includes the traffic light and third head-portrait features of the target person, and a third determination line is located in the preset area at the other end of the pedestrian crossing;
the third image information is also uploaded to the processing system.
Optionally, the tracking detection includes:
acquiring a characteristic length in a tracking image, wherein the characteristic length is any one of the head-shoulder distance, the shoulder width and the head midline length;
generating detection lines along the direction of the characteristic length, one per determination line, wherein the length of each detection line is obtained by weighting the corresponding characteristic length, the weighting coefficient is negatively correlated with the distance from the corresponding determination line to the preset area at one end of the pedestrian crossing, and each detection line is used to detect the positional relationship with its corresponding determination line;
and if a detection line intersects its corresponding determination line, judging that the target person has appeared on that determination line.
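The length-weighting step above can be sketched in a few lines. The patent states only that the weighting coefficient is negatively correlated with each determination line's distance from the preset area; the inverse form `1 / (1 + k * d)` below, and the decay constant `k`, are illustrative assumptions rather than the claimed formula.

```python
def detection_line_lengths(feature_length, distances, k=0.1):
    """Compute one detection-line length per determination line.

    feature_length: the chosen characteristic length in pixels
                    (head-shoulder distance, shoulder width, or
                    head midline length).
    distances:      distance of each determination line from the
                    preset area at the crosswalk entry.
    k:              assumed decay constant; the patent does not
                    specify the weighting function, only that it
                    decreases with distance.
    """
    return [feature_length / (1.0 + k * d) for d in distances]
```

Lines farther from the entry area thus get shorter detection lines, reflecting the smaller apparent size of a pedestrian farther from the camera.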
Optionally, the tracking detection includes:
acquiring a characteristic length h in the first image information, wherein the characteristic length h is any one of the head-shoulder distance, the shoulder width and the head midline length;
generating detection lines along the direction of the characteristic length, one per determination line, wherein the length of each detection line is obtained by weighting the characteristic length h, the weighting coefficient is negatively correlated with the distance from the corresponding determination line to the preset area at one end of the pedestrian crossing, and each detection line is used to detect the positional relationship with its corresponding determination line;
and if a detection line intersects its corresponding determination line, judging that the target person has appeared on that determination line.
Optionally, the determining that the target person runs the red light includes:
acquiring a track line of a target person in a tracking image;
and judging whether the target person runs the red light or not according to the intersection relation between the trajectory line of the target person and the judgment line.
Optionally, the judging whether the target person has run the red light according to the intersection relationship between the trajectory line of the target person and the determination lines includes:
acquiring the number of determination lines intersecting the trajectory line of the target person;
if the number of determination lines intersecting the trajectory line of the target person reaches a preset number threshold, determining that the target person has run the red light;
and if the number of determination lines intersecting the trajectory line of the target person does not reach the preset number threshold, determining that the target person has not run the red light.
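The counting rule above reduces to a one-line check. The function name and the representation of crossed lines as a list of line indices are illustrative assumptions; the rule itself (count reaches a preset threshold) is the one stated in the claim.

```python
def ran_red_light(crossed_line_ids, count_threshold):
    """Judge red-light running from the trajectory-line intersections.

    crossed_line_ids: indices of determination lines that the target
                      person's trajectory line intersected (duplicates
                      from repeated detections are collapsed).
    count_threshold:  the preset number threshold from the claim.
    """
    return len(set(crossed_line_ids)) >= count_threshold
```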
Optionally, the processing system includes at least one of an alarm system and a traffic management system, and before the uploading the first image information and the at least two second image information to the processing system, the method further includes:
acquiring a history red light running record of a target person in a processing system;
and uploading the first image information and the at least two pieces of second image information to at least one of the alarm system and the traffic management system according to the historical red-light-running record of the target person, wherein the alarm system is used to warn pedestrians who run red lights, and the traffic management system is used to penalize the target person's red-light running.
Optionally, the uploading the first image information and the at least two second image information to at least one of an alarm system and a traffic management system according to the history red light running record of the target person includes:
if the red light running times in the historical red light running records of the target personnel are smaller than a preset time threshold value, uploading the first image information and the at least two second image information to an alarm system;
and if the number of red-light-running occurrences in the historical red-light-running record of the target person is not smaller than the preset number threshold, uploading the first image information and the at least two pieces of second image information to both the alarm system and the traffic management system.
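The routing rule above, read so that the two branches are complementary (the second applies once the historical count reaches the threshold), can be sketched as follows; the system names returned are placeholders, not identifiers from the patent.

```python
def route_evidence(history_count, count_threshold):
    """Decide where to upload the evidence group.

    history_count:   red-light-running occurrences already on record
                     for the target person.
    count_threshold: the preset number threshold from the claim.

    Below the threshold the offender is only warned; at or above it,
    the evidence also goes to traffic management for penalty.
    """
    destinations = ["alarm_system"]
    if history_count >= count_threshold:
        destinations.append("traffic_management_system")
    return destinations
```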
In a second aspect, an embodiment of the present invention provides a device for determining that a pedestrian runs a red light, including:
the first acquisition module is used for acquiring first image information of a target person in a preset area at one end of a pedestrian crossing in a red light period of a traffic indicator light, wherein the first image information comprises first head image characteristics of the traffic indicator light and the target person;
the second acquisition module is used for tracking and detecting the target person during the red-light period and, if the positions of the target person and a determination line satisfy a preset positional relationship, judging that the target person has run the red light and acquiring second image information of the target person at that determination line, wherein the second image information includes the traffic light and second head-portrait features of the target person, there are at least two determination lines, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between the preset area at one end of the pedestrian crossing and the preset area at the other end;
and the uploading module is used for uploading the first image information and the at least two second image information to a processing system.
In a third aspect, an embodiment of the present invention provides electronic equipment, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method for determining that a pedestrian has run a red light provided by the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the method for determining that a pedestrian runs a red light provided by the embodiment of the present invention.
In the embodiments of the invention, first image information of a target person in a preset area at one end of a pedestrian crossing is acquired during the red-light period of a traffic light, the first image information including the traffic light and first head-portrait features of the target person; the target person is tracked and detected during the red-light period and, if the positions of the target person and a determination line satisfy a preset positional relationship, the target person is judged to have run the red light and second image information of the target person at that determination line is acquired, the second image information including the traffic light and second head-portrait features of the target person, there being at least two determination lines arranged at intervals between the preset areas at the two ends of the pedestrian crossing, with the number of pieces of second image information corresponding to the number of determination lines; and the first image information and the at least two pieces of second image information are uploaded to a processing system. Because determination lines are demarcated in the crosswalk area during the red-light period and the positional relationship between the target person and the determination lines is established by tracking the target person, whether the target person has run the red light can be determined reliably.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for determining that a pedestrian runs a red light according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another method for determining that a pedestrian runs a red light according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for determining that a pedestrian runs a red light according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another device for judging whether a pedestrian runs a red light according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another device for judging whether a pedestrian runs a red light according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another device for judging whether a pedestrian runs a red light according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another device for judging whether a pedestrian runs a red light according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for determining that a pedestrian runs a red light according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
101. Acquiring, during a red-light period of a traffic light, first image information of a target person in a preset area at one end of a pedestrian crossing, wherein the first image information includes the traffic light and first head-portrait features of the target person.
The red-light period refers to the red-light period of the pedestrian indicator light at the crosswalk. In some possible embodiments, the red light may also refer to a crossing-prohibited period derived from the motor-lane indicator light (traffic light); for example, in a period where crossing is controlled manually, an auxiliary police officer pulls a rope across the crosswalk to prohibit pedestrians from crossing and releases it to let them pass. If the current intersection has only a motor-lane indicator light, the red-light period of the crosswalk is taken to be the green-light period of the motor-lane light, that is, vehicles move while pedestrians stop, and pedestrians move while vehicles stop. The traffic light of this embodiment is preferably the red-light period of a pedestrian light provided separately at the crosswalk. The red-light period starts when the red light turns on and ends when it turns off, although in some possible embodiments it may end when the yellow light turns off. The preset area at one end of the pedestrian crossing can be understood as the junction of the crosswalk and the sidewalk, and may be an area near their boundary line, either inside or outside that line. The size of the preset area can be set in advance according to the user's needs.
Whether the target person is in the area can be judged by a line-intersection method using a virtual line or a physical line. A virtual line is a line obtained by image processing and can be displayed in the image, for example a red line drawn in the image to represent the boundary of the preset area; the virtual line can also be drawn with a real object as reference, for example with the traffic light as the reference object, and the preset area can lie on either side of the virtual line. A physical line may be an actual marking, such as the boundary between the crosswalk and the sidewalk, or the line of the crosswalk closest to the sidewalk (for example the first zebra stripe or the first zebra-stripe gap), or another self-planned line, with the preset area on either side of it. The target person's presence in the preset area indicates an intention to run the red light; for example, when the distance between the target person and the line is less than a certain value, the target person can be considered to intend to run the red light, which can be judged by setting a distance threshold between the target person and the line.
Specifically, the distance between the head-portrait feature of the target person and the line can be calculated, for example by judging, during detection, whether the distance between the centre point of the head-portrait detection box and the line reaches a preset distance threshold; feature points other than the centre point, such as the four corner points of the detection box, may also be used, provided they have a fixed relationship to the head-portrait feature. The presence of the target person in the preset area may also be judged by acquiring connecting lines between the head-portrait feature and other features, such as the shoulder or hand features, and checking whether those connecting lines intersect the boundary of the preset area. It may further be judged by acquiring the extent of the preset area and checking whether the head-portrait feature or other features of the target person have a preset positional relationship with it, such as the feature box intersecting or overlapping the area. The first image information may be acquired by a camera arranged opposite the traffic light. The first head-portrait features may be extracted from the first image information by an image-recognition engine and may include face, hair, glasses and hat features.
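The centre-point distance test described above can be sketched as follows. This is a minimal illustration, assuming an axis-aligned detection box and a straight scribe line given by two points; the threshold value and all names are assumptions, not taken from the patent.

```python
import math

def center_in_preset_area(box, line_p1, line_p2, dist_threshold):
    """Test whether the avatar-box centre lies near the scribe line.

    box:            (x1, y1, x2, y2) head-portrait detection box.
    line_p1/p2:     two (x, y) points defining the scribe line.
    dist_threshold: preset distance threshold in pixels (assumed).

    Returns True when the perpendicular distance from the box centre
    to the line is within the threshold.
    """
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    (x1, y1), (x2, y2) = line_p1, line_p2
    # perpendicular distance from (cx, cy) to the line through p1, p2
    num = abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den <= dist_threshold
```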
102. Tracking and detecting the target person during the red-light period and, if the positions of the target person and a determination line satisfy a preset positional relationship, judging that the target person has run the red light and acquiring second image information of the target person at that determination line, wherein the second image information includes the traffic light and second head-portrait features of the target person, there are at least two determination lines, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between the preset area at one end of the crosswalk and the preset area at the other end.
The tracking detection may detect the target person with an image-tracking algorithm and check whether the target person has moved to a determination line. A determination line may be a virtual line obtained by image processing and displayed in the image, for example a red line drawn in the image; it may also be drawn with a real object as reference, such as the traffic light, or with reference to the first determination line. It may likewise be a physical line drawn at any position within the crosswalk, such as any zebra stripe or zebra-stripe gap other than the first, or another self-planned line. The determination lines may be arranged between the first determination line and the camera, at uniform or non-uniform intervals. The positional relationship between the target person and a determination line can be detected by the line-drawing method described in step 101, which is not repeated here.
When the target person is on a determination line, second image information of the target person is acquired; it can be captured in high definition by the camera under a trigger condition, the trigger condition being that the positions of the target person and the determination line satisfy the preset positional relationship. In a possible embodiment, that positional relationship may be a distance threshold between the target person and the determination line, with the traffic light shown as red in the first image information or in the second image information. The second image information may also be a frame selected from the captured video, for example the frame with the highest image-quality score chosen by Image Quality Assessment (IQA). The second head-portrait features can be extracted from the second image information by an image-recognition engine and may include face, hair, glasses and hat features. It should be noted that there may be two or more determination lines, and each time the target person appears on a determination line, second image information is acquired once, forming an evidence group showing that the target person ran the red light. The camera acquiring the second image information may be the same camera that acquires the first image information, or a separately arranged one.
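The IQA-based frame selection mentioned above is, at its core, an argmax over per-frame quality scores. In practice the scores would come from a no-reference IQA model (e.g. a BRISQUE-style scorer); the `(frame_id, score)` pairs below are placeholders standing in for that model's output.

```python
def select_evidence_frame(scored_frames):
    """Pick the frame with the highest image-quality score.

    scored_frames: list of (frame_id, iqa_score) pairs, one per frame
                   of the captured video segment.
    Returns the frame_id with the highest score.
    """
    frame_id, _score = max(scored_frames, key=lambda pair: pair[1])
    return frame_id
```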
103. Uploading the first image information and the at least two pieces of second image information to a processing system.
The evidence images may be at least three full images formed by the first image and the at least two second images; alternatively, the head-portrait features extracted from these full images by the image-recognition engine may serve as thumbnails that, together with the full images, form an evidence group, strengthening the evidence. The processing system may include a real-time warning system, which warns the target person; a management system of the traffic-management department, which penalizes the target person; and a recording system, which records the current red-light-running behaviour of the target person.
In the above steps, a first determination line and further determination lines are demarcated in the crosswalk area during the red-light period, the target person is tracked, and the positional relationship between the target person and the determination lines is established, so that the target person's running of the red light is determined.
It should be noted that the method for judging that a pedestrian has run a red light provided by the embodiments of the present invention can be applied to any device capable of making that judgment, such as a computer, a server or a mobile phone. The camera in the embodiments of the present invention may be any device that can collect image information, such as an angle-adjustable camera, a wide-angle camera, a 2D camera or a 3D camera. The head-portrait feature in the embodiments of the present invention may also be called a head-portrait feature value or simply a head portrait.
In the embodiments of the invention, first image information of a target person in a preset area at one end of a pedestrian crossing is acquired during the red-light period of a traffic light, the first image information including the traffic light and first head-portrait features of the target person; the target person is tracked and detected during the red-light period and, if the positions of the target person and a determination line satisfy a preset positional relationship, the target person is judged to have run the red light and second image information of the target person at that determination line is acquired, the second image information including the traffic light and second head-portrait features of the target person, there being at least two determination lines arranged at intervals between the preset areas at the two ends of the pedestrian crossing, with the number of pieces of second image information corresponding to the number of determination lines; and the first image information and the at least two pieces of second image information are uploaded to a processing system. Because determination lines are demarcated in the crosswalk area during the red-light period and the positional relationship between the target person and the determination lines is established by tracking the target person, whether the target person has run the red light can be determined reliably.
In one possible embodiment, the method further comprises:
203. Acquiring third image information of the target person in a preset area at the other end of the pedestrian crossing during the red-light period of the traffic light, wherein the third image information includes the traffic light and third head-portrait features of the target person, and a third determination line is located in the preset area at the other end of the pedestrian crossing;
204. the third image information is also uploaded to the processing system.
Whether the target person appears in the preset area at the other end of the pedestrian crossing can be detected according to the method in step 101. When the target person appears in that area, third image information of the target person can be acquired; for example, the camera device can capture a high-definition image of the target person under a trigger condition, the trigger condition being that the target person appears in the preset area at the other end of the pedestrian crossing. The third image information may also be selected from a tracking video of the target person, for example, by selecting the frame with the highest quality score through Image Quality Assessment (IQA). The third head portrait feature can be extracted from the third image information by an image recognition engine, and may be a face feature, a hair feature, a glasses feature, a hat feature, or the like. It should be noted that the third image information may be combined with the first image information and the second image information to form a group of evidence that the target person ran the red light. The camera device that acquires the third image information may be the same as the camera device that acquires the first image information, or may be a separately installed camera device. In a possible implementation, the head portrait feature in the third image information may be extracted as the recognition object, so as to obtain the identity information of the target person.
In this embodiment, the third image information, which is captured closest to the camera device, is acquired as evidence of the target person running the red light, so that the quality of the evidence image can be further improved.
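The frame-selection step described above can be sketched as follows. This is a minimal illustration: `score_frame` is a hypothetical stand-in for an Image Quality Assessment (IQA) model, not any specific engine used by the embodiment.

```python
def select_best_frame(frames, score_frame):
    """Return the frame with the highest image-quality score.

    frames: an iterable of decoded video frames
    score_frame: a callable mapping a frame to a quality score
                 (hypothetical placeholder for an IQA model)
    """
    best_frame, best_score = None, float("-inf")
    for frame in frames:
        score = score_frame(frame)
        if score > best_score:
            best_frame, best_score = frame, score
    return best_frame
```

In practice the frames would come from the tracking video of the target person, and the selected frame would serve as the third image information.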
In an alternative embodiment, the tracking detection comprises:
acquiring a feature length in a tracking image, wherein the feature length comprises any one of a head-shoulder distance, a shoulder distance, and a head midline length;
correspondingly generating detection lines along the feature length direction according to the number of determination lines, wherein the length of each detection line is obtained by a weighted calculation on the corresponding feature length, the weighting coefficient is negatively correlated with the distance from the determination line to the first determination line, and the detection lines are used for detecting the positional relationship with the corresponding determination line;
and if a detection line intersects the corresponding determination line, determining that the target person appears on that determination line.
The above-mentioned tracking image may be understood as a tracking video of the target person, which may be obtained by the camera device in real time. The above-mentioned feature length may be understood as a distance between features in the tracking image, such as the head-shoulder distance between the head portrait feature and the shoulder feature, the shoulder distance between the two shoulder features, or the head midline length.
The number of detection lines generated is the same as the number of determination lines. The feature length direction may be understood as the direction along which the distance between features lies; for example, the direction of the head-shoulder distance may be the line connecting the head portrait feature and the shoulder feature, whose length is the head-shoulder distance. This line may be extended or shortened by a weighting coefficient, using the feature farther from the determination lines as the reference point, to obtain the corresponding detection line. For example, assume the number of determination lines is 3, set in order as determination line 1, determination line 2, and determination line 3. In the detection process, corresponding detection lines L1, L2, and L3 are generated along the head-shoulder direction. When determination line 1 is checked, the head-shoulder distance h1 of the target person is acquired and multiplied by a preset weighting coefficient x1 to obtain the length of detection line L1; whether L1 intersects determination line 1 is then judged to obtain the positional relationship between them, and if they intersect, the target person is considered to be located on determination line 1. When determination line 2 is checked, the head-shoulder distance h2 of the target person is acquired and multiplied by a preset weighting coefficient x2 to obtain the length of detection line L2, and whether L2 intersects determination line 2 is judged in the same way; if they intersect, the target person is determined to be located on determination line 2. The same method applies to determination line 3. Note that the weighting coefficients decrease according to the arrangement order of the determination lines: in the above example, the distance from determination line 1 to the first determination line is the smallest, so the corresponding weighting coefficient x1 is the largest and x3 is the smallest. The specific weighting coefficients may be determined according to the installation distance and angle of the camera device and the positions and number of the determination lines.
In this optional embodiment, the detection lines are generated from features in the tracking image and used to judge the positions of the target person relative to the determination lines, so as to determine whether the target person runs the red light. This improves detection precision, and thus the accuracy of red-light-running determination; meanwhile, abstracting the target person into a line can improve detection efficiency.
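As a minimal sketch of the detection-line mechanism above, the following assumes image coordinates in which the determination lines are horizontal and the detection line is drawn from a tracked anchor feature toward them; the coordinate convention and function names are illustrative assumptions, not the embodiment's actual implementation.

```python
def detection_line_lengths(feature_length, weights):
    """Scale the measured feature length (e.g. head-shoulder distance)
    by each determination line's weighting coefficient.  Per the text,
    the coefficients decrease with the line's distance from the first
    determination line, e.g. x1 > x2 > x3."""
    return [w * feature_length for w in weights]

def intersects(anchor_y, line_length, determination_y):
    """A detection line drawn from the anchor point (e.g. the head
    feature) toward the determination lines covers the segment
    [anchor_y, anchor_y + line_length]; it intersects a determination
    line if that line's y-coordinate falls inside the segment."""
    return anchor_y <= determination_y <= anchor_y + line_length
```

With three determination lines, `detection_line_lengths(h1, [x1, x2, x3])` yields the lengths of L1, L2, and L3, and `intersects` reports whether the target person has reached each corresponding line.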
In an alternative embodiment, the tracking detection comprises:
acquiring a feature length h in the first image information, wherein the feature length h comprises any one of a head-shoulder distance, a shoulder distance, and a head midline length;
correspondingly generating detection lines along the feature length direction according to the number of determination lines, wherein the length of each detection line is obtained by a weighted calculation on the feature length h, the weighting coefficient is negatively correlated with the distance from the determination line to the first determination line, and the detection lines are used for detecting the positional relationship with the corresponding determination line;
and if a detection line intersects the corresponding determination line, determining that the target person appears on that determination line.
The feature length h may be understood as a distance between features in the first image information, such as the head-shoulder distance between the head portrait feature and the shoulder feature, the shoulder distance between the two shoulder features, or the head midline length.
The number of detection lines generated is the same as the number of determination lines. The feature length direction may be understood as the direction along which the distance between features lies; for example, the direction of the head-shoulder distance may be the line connecting the head portrait feature and the shoulder feature, whose length is the head-shoulder distance. This line may be extended or shortened by a weighting coefficient, using the feature farther from the determination lines as the reference point, to obtain the corresponding detection line. For example, assume the feature length h in the first image information is the head-shoulder distance h and the number of determination lines is 3, set in order as determination line 1, determination line 2, and determination line 3. In the detection process, corresponding detection lines L1, L2, and L3 are generated along the head-shoulder direction. When determination line 1 is checked, the head-shoulder distance h of the target person is multiplied by a preset weighting coefficient x1 to obtain the length x1h of detection line L1; whether L1 intersects determination line 1 is then judged to obtain the positional relationship between them, and if detection line L1 intersects determination line 1, the target person is considered to be located on determination line 1. When determination line 2 is checked, the head-shoulder distance h is multiplied by a preset weighting coefficient x2 to obtain the length x2h of detection line L2, and whether L2 intersects determination line 2 is judged in the same way; if they intersect, the target person is determined to be located on determination line 2. The same method applies to determination line 3. Note that the weighting coefficients decrease according to the arrangement order of the determination lines: in the above example, the distance from determination line 1 to the first determination line is the smallest, so the corresponding weighting coefficient x1 is the largest and x3 is the smallest. The specific weighting coefficients may be determined according to the installation distance and angle of the camera device and the positions and number of the determination lines.
In this optional embodiment, the detection lines are generated from features in the first image information and used to judge the positions of the target person relative to the determination lines, so as to determine whether the target person runs the red light, thereby improving detection precision and, in turn, the accuracy of red-light-running determination.
In one possible embodiment, the determining that the target person runs the red light includes:
acquiring a trajectory line of the target person in a tracking image;
and determining whether the target person runs the red light according to the intersection relationship between the trajectory line of the target person and the determination lines.
The trajectory line may be obtained from the tracking video; for example, feature positions in consecutive image frames of the video may be captured, and the trajectory line obtained from the sequence of position changes. The trajectory line describes the change in the target person's position on the crosswalk. Whether the target person runs the red light can be judged by whether the trajectory line intersects the determination lines: for example, if the trajectory line intersects all of the determination lines, the target person may be considered to have run the red light; if it intersects only some of them, the target person may be considered suspected of running the red light; and if it intersects only a small portion of them, the target person may be considered not to have run the red light.
Through the intersection relationship between the trajectory line and the determination lines, the red-light-running behavior of the target person can be further confirmed, thereby improving the accuracy of the red-light-running determination.
In a possible implementation manner, the determining whether the target person runs the red light according to the intersection relationship between the trajectory line of the target person and the determination line includes:
acquiring the number of determination lines intersecting the trajectory line of the target person;
if the number of determination lines intersecting the trajectory line of the target person reaches a preset number threshold, determining that the target person runs the red light;
and if the number of determination lines intersecting the trajectory line of the target person does not reach the preset number threshold, determining that the target person does not run the red light.
The shape of the trajectory line can be determined from the positions of the features along it, and the number of intersected determination lines can be calculated from the end point of the trajectory line, that is, the current position of the target person. For example, if there are 5 determination lines and the end point of the trajectory line lies between the third and fourth determination lines, the number of determination lines intersected by the trajectory line is 3; that is, the target person has reached the third determination line on the crosswalk. Whether the target person runs the red light can then be judged from this count: for example, if the trajectory line intersects 80% of the determination lines, the target person may be considered to have run the red light; if the count does not reach 40% of the number of determination lines, the target person may be considered not to have run the red light, for instance when the target person stepped onto the crosswalk but did not continue because of oncoming vehicles or other reasons, so that no red-light-running determination is needed.
In this embodiment, whether the target person exhibits red-light-running behavior is determined by counting the intersections between the trajectory line and the determination lines, further improving the accuracy of the red-light-running determination.
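The counting rule above can be sketched as follows, under the illustrative assumption that the determination lines lie at increasing distances from the starting kerb and the trajectory endpoint is measured along that same axis:

```python
def count_crossed_lines(trajectory_end, line_positions):
    """Count determination lines the trajectory has already passed.

    line_positions: the lines' distances from the starting kerb, in
    increasing order; a line counts as crossed once the trajectory
    endpoint has moved past it.
    """
    return sum(1 for pos in line_positions if trajectory_end >= pos)

def ran_red_light(trajectory_end, line_positions, ratio=0.8):
    """Apply the preset number threshold: in the text's example a
    trajectory intersecting 80% of the determination lines counts as
    running the red light (the 0.8 ratio is illustrative)."""
    threshold = ratio * len(line_positions)
    return count_crossed_lines(trajectory_end, line_positions) >= threshold
```

With 5 lines at positions 1–5 and a trajectory endpoint between the third and fourth lines, the count is 3, which falls short of the 80% threshold of 4 lines.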
In a possible embodiment, the processing system comprises at least one of an alarm system and a traffic management system, and before uploading the first image information and the at least two pieces of second image information to the processing system, the method further comprises:
acquiring the historical red-light-running record of the target person in the processing system;
and uploading the first image information and the at least two pieces of second image information to at least one of the alarm system and the traffic management system according to the historical red-light-running record of the target person, wherein the alarm system is used for prompting the pedestrian about running the red light, and the traffic management system is used for penalizing the target person for running the red light.
The historical red-light-running record of the target person can be queried through the target person's head portrait feature. Specifically, the target person is identified through the head portrait feature to obtain the identity information of the target person; a static library storing red-light-running records of persons is then searched according to this identity information to obtain the historical red-light-running record of the target person, and the red-light-running information of the target person is sent to the corresponding processing system according to this record. For example, if the number of historical red-light-running records of the target person is smaller than a threshold, the red-light-running information is sent to the alarm system, which generates alarm information and sends it to field equipment for real-time display so as to warn the target person; if the number of historical red-light-running records reaches the threshold, the red-light-running information is sent to the management system of the traffic management department, which generates penalty information and sends it to a device or address at which the target person can be reached.
In this embodiment, different processing systems are selected for evidence uploading according to the target person's historical red-light-running record, so that the evidence can be handled in a targeted manner: for example, if the target person has rarely run the red light historically, the behavior may be regarded as accidental, while a frequent history may be regarded as intentional red-light-running.
In a possible embodiment, uploading the first image information and the at least two pieces of second image information to at least one of the alarm system and the traffic management system according to the historical red-light-running record of the target person comprises:
if the number of red-light-running occurrences in the historical red-light-running record of the target person is smaller than a preset count threshold, uploading the first image information and the at least two pieces of second image information to the alarm system;
and if the number of red-light-running occurrences in the historical red-light-running record of the target person reaches or exceeds the preset count threshold, uploading the first image information and the at least two pieces of second image information to the alarm system and the traffic management system.
The number of red-light-running occurrences can be obtained as in the previous optional embodiment, and the preset count threshold can be set by the user. A count below the threshold can indicate that the target person does not habitually run the red light, in which case the target person may merely be reminded through the alarm system; in addition, the current red-light-running information can be uploaded to the static library as part of the historical record. A count above the threshold indicates that the target person intentionally runs the red light, and punishment can be imposed through the traffic management system. The prompting of the target person by the alarm system may be implemented by the alarm system generating prompt information and sending it to the crosswalk intersection where the target person is located, to be displayed by the field equipment arranged there. Since running a red light is dangerous, in order to warn of that danger, a prompt may be issued to any target person who runs the red light, regardless of the historical count.
In this embodiment, the target person's historical red-light-running count is compared with the threshold so that red-light-running behavior is classified, and the behavior is then handled in a targeted manner according to the classification.
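The threshold-based routing described above can be sketched as follows; the `upload` interface of the two systems is a hypothetical stand-in, since the embodiment does not specify the processing systems' actual interfaces:

```python
def dispatch_evidence(history_count, threshold, evidence, alarm_system, traffic_system):
    """Route the evidence images (first, second and optionally third
    image information) according to the target person's historical
    red-light-running count.

    Every offender is prompted through the alarm system, since running
    a red light is dangerous regardless of history; once the count
    reaches the threshold, the traffic management system is also
    notified so that a penalty can be issued.
    """
    alarm_system.upload(evidence)
    if history_count >= threshold:
        traffic_system.upload(evidence)
```

The design keeps the alarm unconditional and gates only the penalty path, matching the text's note that a warning may be issued for any historical count.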
In this embodiment, various optional implementations are added on the basis of the embodiment shown in fig. 1, which can further improve the accuracy of determining red-light-running behavior and thereby improve the quality of the evidence.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a device for determining a red light running of a pedestrian according to an embodiment of the present invention, as shown in fig. 3, including:
the first obtaining module 501 is configured to obtain first image information of a target person in a preset area at one end of a pedestrian crossing in a red light period of a traffic indicator light, where the first image information includes first head image features of the traffic indicator light and the target person;
a second obtaining module 502, configured to track and detect the target person during the red light period, and if it is detected that the positions of the target person and a determination line meet a preset positional relationship, determine that the target person runs the red light and acquire second image information of the target person at the determination line, where the second image information includes the traffic indicator light and a second head portrait feature of the target person, the number of determination lines is at least two, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between the preset area at one end of the pedestrian crossing and a preset area at the other end;
an uploading module 503, configured to upload the first image information and the at least two second image information to a processing system.
Optionally, as shown in fig. 4, the apparatus further includes:
a third obtaining module 504, configured to obtain third image information of the target person in a preset area at the other end of the pedestrian crossing when the traffic indicator light is in the red light period, where the third image information includes third head portrait features of the traffic indicator light and the target person, and the third determination line is located in the preset area at the other end of the pedestrian crossing;
the upload module 503 is further configured to upload the third image information to the processing system as well.
Optionally, as shown in fig. 5, the second obtaining module 502 includes:
a first acquiring unit 5021, configured to acquire a feature length in a tracking image, where the feature length includes any one of a head-shoulder distance, a shoulder distance, and a head midline length;
a generating unit 5022, configured to correspondingly generate detection lines along the characteristic length direction according to the number of the determination lines, where the length of the detection line is obtained by performing weighted calculation on the corresponding characteristic length, a coefficient of the weighting is in negative correlation with a distance from the determination line to a preset area at one end of the crosswalk, and the detection line is used for detecting a positional relationship with the corresponding determination line;
a first judging unit 5023, configured to judge that the target person appears on the corresponding judgment line if a detection line intersects with the corresponding judgment line.
Optionally, as shown in fig. 5, the obtaining unit 5021 is configured to obtain a characteristic length h in the first image information, where the characteristic length h includes any one of a head-shoulder distance, a shoulder distance, and a head midline length;
the generating unit 5022 is used for correspondingly generating detection lines along the characteristic length direction according to the number of the determination lines, the length of the detection lines is obtained by weighting and calculating the characteristic length h, the weighted coefficient is in negative correlation with the distance from the determination lines to a preset area at one end of the crosswalk, and the detection lines are used for detecting the position relation of the detection lines and the corresponding determination lines;
the determination unit 5023 is configured to determine that the target person appears on the corresponding determination line if a detection line intersects with the corresponding determination line.
Optionally, as shown in fig. 6, the second obtaining module 502 includes:
a second acquiring unit 5024, configured to acquire a trajectory line of the target person in the tracking image;
the second determination unit 5025 is used for determining whether the target person runs the red light according to the intersection relationship between the trajectory line of the target person and the determination line.
Alternatively, as shown in fig. 6, the second acquiring unit 5024 is used for acquiring the number of determination lines intersecting with the trajectory of the target person,
the second determination unit 5025 is used for determining that the target person runs the red light if the number of the determination lines intersected with the trajectory of the target person reaches a preset number threshold;
the second determination unit 5025 is also used for determining that the target person does not run the red light if the number of determination lines intersecting the trajectory line of the target person does not reach a preset number threshold.
Optionally, as shown in fig. 7, the processing system includes at least one of an alarm system and a traffic management system, and the uploading module 503 includes:
a third obtaining unit 5031, configured to obtain a history red light running record of a target person in the processing system;
an uploading unit 5032, configured to upload the first image information and the at least two pieces of second image information to at least one of an alarm system and a traffic management system according to the history red light running record of the target person, where the alarm system is configured to prompt a pedestrian to run a red light, and the traffic management system is configured to perform red light running punishment on the target person.
Optionally, the uploading unit 5032 is configured to upload the first image information and the at least two second image information to an alarm system if the red light running times in the history red light running record of the target person are smaller than a preset time threshold;
the uploading unit 5032 is further configured to upload the first image information and the at least two pieces of second image information to the alarm system and the traffic management system if the number of red-light-running occurrences in the historical red-light-running record of the target person reaches or exceeds the preset count threshold.
It should be noted that the above device can be applied to a device for determining that a pedestrian runs a red light, for example, a computer, a server, a mobile phone, or other device capable of determining whether a pedestrian runs a red light.
The device for determining that a pedestrian runs a red light provided by the embodiment of the present invention can implement each implementation of the method embodiments of fig. 1 and fig. 2 with the corresponding beneficial effects, which are not repeated here to avoid repetition.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 8, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein:
the processor 601 is used for calling the computer program stored in the memory 602, and executing the following steps:
acquiring first image information of a target person in a preset area at one end of a pedestrian crossing in a red light period of a traffic indicator light, wherein the first image information comprises first head image characteristics of the traffic indicator light and the target person;
tracking and detecting the target person during the red light period, and if it is detected that the positions of the target person and a determination line meet a preset positional relationship, determining that the target person runs the red light and acquiring second image information of the target person at the determination line, wherein the second image information comprises the traffic indicator light and a second head portrait feature of the target person, the number of determination lines is at least two, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between the preset area at one end of the pedestrian crossing and a preset area at the other end;
uploading the first image information and the at least two second image information to a processing system.
Processor 601 further performs operations comprising:
acquiring third image information of the target person in a preset area at the other end of the pedestrian crossing during the red light period of the traffic indicator light, wherein the third image information comprises the traffic indicator light and a third head portrait feature of the target person, and the third determination line is located in the preset area at the other end of the pedestrian crossing;
the uploading of the first image information and the at least two second image information to a processing system performed by processor 601 includes:
the third image information is also uploaded to the processing system.
Optionally, the tracking detection performed by the processor 601 includes:
acquiring characteristic length in a tracking image, wherein the characteristic length comprises any one of head-shoulder distance, shoulder distance and head midline length;
correspondingly generating detection lines along the characteristic length direction according to the number of the determination lines, wherein the length of the detection lines is obtained by carrying out weighted calculation on the corresponding characteristic length, the weighted coefficient is in negative correlation with the distance from the determination line to a preset area at one end of the pedestrian crossing, and the detection lines are used for detecting the position relation with the corresponding determination line;
and if the detection line is intersected with the corresponding judgment line, judging that the target person appears on the corresponding judgment line.
Optionally, the tracking detection performed by the processor 601 includes:
acquiring a characteristic length h in the first image information, wherein the characteristic length h comprises any one of a head-shoulder distance, a shoulder distance and a head midline length;
correspondingly generating detection lines along the characteristic length direction according to the number of the determination lines, wherein the length of the detection lines is obtained by weighting and calculating the characteristic length h, the weighted coefficient is in negative correlation with the distance from the determination lines to a preset area at one end of the pedestrian crossing, and the detection lines are used for detecting the position relation with the corresponding determination lines;
and if the detection line is intersected with the corresponding judgment line, judging that the target person appears on the corresponding judgment line.
Optionally, the determining that the target person runs the red light by the processor 601 includes:
acquiring a track line of a target person in a tracking image;
and judging whether the target person runs the red light or not according to the intersection relation between the trajectory line of the target person and the judgment line.
Optionally, the determining, executed by the processor 601, whether the target person runs the red light according to the intersection relationship between the trajectory line of the target person and the determination line includes:
acquiring the number of determination lines intersecting the trajectory line of the target person;
if the number of the determination lines intersected with the target person trajectory line reaches a preset number threshold value, determining that the target person runs the red light;
and if the number of the determination lines intersected with the trajectory of the target person does not reach a preset number threshold, determining that the target person does not run the red light.
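A minimal sketch of this trajectory-based decision, under the assumption that determination lines are horizontal lines at known image heights and the trajectory is the polyline of tracked positions (the function names and the straddle test are illustrative, not from the patent):

```python
# Illustrative sketch: a determination line at height y is "crossed" if any
# consecutive pair of trajectory points straddles it; red-light running is
# judged when the crossing count reaches the preset number threshold.

def crossed_lines(trajectory, line_ys):
    """Count determination lines intersected by the trajectory polyline."""
    count = 0
    for y in line_ys:
        for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]):
            if min(y1, y2) <= y <= max(y1, y2):
                count += 1
                break  # each determination line is counted at most once
    return count

def runs_red_light(trajectory, line_ys, threshold):
    """Red-light running if the crossing count reaches the preset threshold."""
    return crossed_lines(trajectory, line_ys) >= threshold
```

Requiring several determination-line crossings, rather than a single one, suppresses false positives from a pedestrian who briefly steps onto the crossing and retreats.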
Optionally, the processing system includes at least one of an alarm system and a traffic management system, and before the processor 601 uploads the first image information and the at least two second image information to the processing system, the processor 601 further performs steps including:
acquiring a history red light running record of a target person in a processing system;
and uploading the first image information and the at least two pieces of second image information to at least one of the alarm system and the traffic management system according to the historical red light running record of the target person, wherein the alarm system is used for warning the pedestrian against running the red light, and the traffic management system is used for punishing the target person for running the red light.
Optionally, the uploading, by the processor 601, the first image information and the at least two second image information to at least one of an alarm system and a traffic management system according to the history red light running record of the target person includes:
if the number of red light running times in the historical red light running record of the target person is smaller than a preset number threshold, uploading the first image information and the at least two pieces of second image information to the alarm system;
and if the number of red light running times in the historical red light running record of the target person is not smaller than the preset number threshold, uploading the first image information and the at least two pieces of second image information to both the alarm system and the traffic management system.
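The history-based routing can be sketched as below; the `Sink` class, the `route_evidence` name, and the exact branch semantics (alarm system always notified, traffic management system additionally notified once the offence count reaches the threshold) are illustrative assumptions.

```python
# Illustrative sketch of history-based evidence routing: infrequent offenders
# only trigger the alarm system; repeat offenders are also reported to the
# traffic management system for punishment.

class Sink:
    """Stand-in for an alarm or traffic management endpoint."""
    def __init__(self):
        self.received = []

    def upload(self, images):
        self.received.append(images)

def route_evidence(history_count, threshold, images, alarm, traffic):
    """Upload evidence images to the alarm system, and additionally to the
    traffic management system when the offence count reaches the threshold."""
    alarm.upload(images)              # always warn the pedestrian
    if history_count >= threshold:
        traffic.upload(images)        # repeat offence: forward for punishment
```

This tiered response gives first-time offenders a warning while reserving administrative punishment for repeat offenders, matching the two branches described above.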
It should be noted that the electronic device may be any device capable of judging whether a pedestrian runs a red light, for example a computer, a server, or a mobile phone.
The device for judging whether a pedestrian runs a red light provided by the embodiment of the invention can realize each implementation of the method embodiments of fig. 1 and fig. 2 together with the corresponding beneficial effects; details are not repeated here to avoid repetition.
An embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the above embodiments of the method for judging whether a pedestrian runs a red light, with the same technical effects; details are not repeated here to avoid repetition.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A method for judging whether a pedestrian runs a red light is characterized by comprising the following steps:
acquiring first image information of a target person in a preset area at one end of a pedestrian crossing in a red light period of a traffic indicator light, wherein the first image information comprises first head image characteristics of the traffic indicator light and the target person;
tracking and detecting the target person during the red light period, and if the position of the target person and the position of a determination line satisfy a preset positional relationship, judging that the target person runs the red light and acquiring second image information of the target person when the target person is located on the determination line, wherein the second image information comprises the traffic indicator light and second head portrait characteristics of the target person, the number of determination lines is at least two, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between a preset area at one end of the pedestrian crossing and a preset area at the other end of the pedestrian crossing;
uploading the first image information and the at least two second image information to a processing system.
2. The method of claim 1, wherein the method further comprises:
acquiring third image information of the target person in a preset area at the other end of the pedestrian crossing during the red light period of the traffic indicator light, wherein the third image information comprises the traffic indicator light and third head portrait characteristics of the target person, and a third determination line is located in the preset area at the other end of the pedestrian crossing;
the third image information is also uploaded to the processing system.
3. The method of claim 1, wherein the tracking detection comprises:
acquiring characteristic length in a tracking image, wherein the characteristic length comprises any one of head-shoulder distance, shoulder distance and head midline length;
correspondingly generating detection lines along the characteristic length direction according to the number of the determination lines, wherein the length of the detection lines is obtained by carrying out weighted calculation on the corresponding characteristic length, the weighted coefficient is in negative correlation with the distance from the determination line to a preset area at one end of the pedestrian crossing, and the detection lines are used for detecting the position relation with the corresponding determination line;
and if the detection line is intersected with the corresponding judgment line, judging that the target person appears on the corresponding judgment line.
4. The method of claim 1, wherein the tracking detection comprises:
acquiring a characteristic length h in the first image information, wherein the characteristic length h comprises any one of a head-shoulder distance, a shoulder distance and a head midline length;
correspondingly generating detection lines along the characteristic length direction according to the number of the determination lines, wherein the length of the detection lines is obtained by weighting and calculating the characteristic length h, the weighted coefficient is in negative correlation with the distance from the determination lines to a preset area at one end of the pedestrian crossing, and the detection lines are used for detecting the position relation with the corresponding determination lines;
and if the detection line is intersected with the corresponding judgment line, judging that the target person appears on the corresponding judgment line.
5. The method of claim 1, wherein said determining that the target person is running a red light comprises:
acquiring a track line of a target person in a tracking image;
and judging whether the target person runs the red light or not according to the intersection relation between the trajectory line of the target person and the judgment line.
6. The method as claimed in claim 5, wherein the determining whether the target person runs the red light according to the intersection relationship between the trajectory line of the target person and the determination line comprises:
acquiring the number of determination lines intersecting the trajectory line of the target person;
if the number of the determination lines intersected with the target person trajectory line reaches a preset number threshold value, determining that the target person runs the red light;
and if the number of the determination lines intersected with the trajectory of the target person does not reach a preset number threshold, determining that the target person does not run the red light.
7. The method of any one of claims 1 to 6, wherein the processing system comprises at least one of an alarm system, a traffic management system, and prior to said uploading the first image information and the at least two second image information to the processing system, the method further comprises:
acquiring a history red light running record of a target person in a processing system;
and uploading the first image information and the at least two pieces of second image information to at least one of the alarm system and the traffic management system according to the historical red light running record of the target person, wherein the alarm system is used for warning the pedestrian against running the red light, and the traffic management system is used for punishing the target person for running the red light.
8. The method of claim 7, wherein uploading the first image information and the at least two second image information to at least one of an alarm system and a traffic management system based on the historical red light running record of the target person comprises:
if the number of red light running times in the historical red light running record of the target person is smaller than a preset number threshold, uploading the first image information and the at least two pieces of second image information to the alarm system;
and if the number of red light running times in the historical red light running record of the target person is not smaller than the preset number threshold, uploading the first image information and the at least two pieces of second image information to both the alarm system and the traffic management system.
9. A device for judging whether a pedestrian runs a red light, characterized by comprising:
the first acquisition module is used for acquiring first image information of a target person in a preset area at one end of a pedestrian crossing in a red light period of a traffic indicator light, wherein the first image information comprises first head image characteristics of the traffic indicator light and the target person;
the second obtaining module is used for tracking and detecting the target person during the red light period, and if the position of the target person and the position of a determination line satisfy a preset positional relationship, judging that the target person runs the red light and acquiring second image information of the target person when the target person is located on the determination line, wherein the second image information comprises the traffic indicator light and second head portrait characteristics of the target person, the number of determination lines is at least two, the number of pieces of second image information corresponds to the number of determination lines, and the at least two determination lines are arranged at intervals between a preset area at one end of the pedestrian crossing and a preset area at the other end of the pedestrian crossing;
and the uploading module is used for uploading the first image information and the at least two second image information to a processing system.
10. An electronic device, comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for determining a red light violation according to any one of claims 1 to 8 when executing the computer program.
CN201811648609.4A 2018-12-30 2018-12-30 Pedestrian red light running judging method and device and electronic equipment Active CN111383248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811648609.4A CN111383248B (en) 2018-12-30 2018-12-30 Pedestrian red light running judging method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111383248A true CN111383248A (en) 2020-07-07
CN111383248B CN111383248B (en) 2023-10-13

Family

ID=71220618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811648609.4A Active CN111383248B (en) 2018-12-30 2018-12-30 Pedestrian red light running judging method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111383248B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070004A (en) * 2020-09-07 2020-12-11 北京软通智慧城市科技有限公司 Intelligent city violation supervision method, system, server and storage medium
CN114648888A (en) * 2022-04-15 2022-06-21 山东金宇信息科技集团有限公司 Intelligent traffic management system based on big data
CN115797849A (en) * 2023-02-03 2023-03-14 以萨技术股份有限公司 Data processing system for determining abnormal behaviors based on images

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160232785A1 (en) * 2015-02-09 2016-08-11 Kevin Sunlin Wang Systems and methods for traffic violation avoidance
CN108376246A (en) * 2018-02-05 2018-08-07 南京蓝泰交通设施有限责任公司 A kind of identification of plurality of human faces and tracking system and method


Also Published As

Publication number Publication date
CN111383248B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
CN105493502B (en) Video monitoring method, video monitoring system and computer readable storage medium
JP7355151B2 (en) Information processing device, information processing method, program
CN111126235A (en) Method and device for detecting and processing illegal berthing of ship
CN103021175A (en) Pedestrian red light running video detection method and device based on Davinci architecture
KR102001002B1 (en) Method and system for recognzing license plate based on deep learning
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN108363953B (en) Pedestrian detection method and binocular monitoring equipment
KR102453627B1 (en) Deep Learning based Traffic Flow Analysis Method and System
CN102792314A (en) Cross traffic collision alert system
CN111383248A (en) Method and device for judging red light running of pedestrian and electronic equipment
CN114267082B (en) Bridge side falling behavior identification method based on depth understanding
CN113676702A (en) Target tracking monitoring method, system and device based on video stream and storage medium
CN113674523A (en) Traffic accident analysis method, device and equipment
CN109670431A (en) A kind of behavioral value method and device
CN110147731A (en) Vehicle type recognition method and Related product
CN113658427A (en) Road condition monitoring method, system and equipment based on vision and radar
CN114140767A (en) Road surface detection method, device, equipment and storage medium
CN117132768A (en) License plate and face detection and desensitization method and device, electronic equipment and storage medium
Suttiponpisarn et al. Detection of wrong direction vehicles on two-way traffic
CN116311166A (en) Traffic obstacle recognition method and device and electronic equipment
CN116128360A (en) Road traffic congestion level evaluation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant