CN115174889A - Position deviation detection method for camera, electronic device, and storage medium - Google Patents


Info

Publication number
CN115174889A
Authority
CN
China
Prior art keywords
road
information
camera
video data
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210629761.8A
Other languages
Chinese (zh)
Inventor
危春波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202210629761.8A
Publication of CN115174889A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Abstract

Embodiments of the present application provide a position offset detection method for a camera, an electronic device, and a storage medium. The method comprises the following steps: acquiring position range information of road-related facilities, wherein the position range information of the road-related facilities is determined in advance based on first road video data collected by a camera; acquiring second road video data collected by the camera; detecting position information of a moving object based on the second road video data; matching the position information of the moving object with the position range information of the road-related facilities to determine a matching result; generating offset alarm information when the matching result meets an offset condition; and sending the offset alarm information so that the shooting angle of the camera can be adjusted. By comparing the moving objects on the road with the static road facilities, and analyzing whether a moving object appears at the position of a static road facility where it should not appear, an offset can be discovered and reported in time, which ensures the accuracy of subsequent results based on the camera picture.

Description

Position deviation detection method for camera, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for detecting a position offset of a camera, an electronic device, and a storage medium.
Background
A camera is prone to shifting during continuous operation (e.g., polling among preset positions), so that the captured picture changes and subsequent processing results based on that picture become inaccurate.
Taking a road camera as an example, when the road video picture does not match expectations, event judgment rules based on traffic regulations become invalid, causing events to be falsely reported: for example, a lane region detected in the wrong position may cause false reports of wrong-way driving, of pedestrians walking on a motorway, or of motor vehicles occupying an emergency lane.
Therefore, one technical problem that needs to be solved by those skilled in the art is: how to discover in time that a camera has shifted.
Disclosure of Invention
Embodiments of the present application provide a position offset detection method for a camera, so that a camera offset can be discovered in time.
Correspondingly, embodiments of the present application also provide an electronic device and a storage medium to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses a method for detecting a position offset of a camera, where the method includes:
acquiring position range information of road associated facilities, wherein the position range information of the road associated facilities is determined in advance based on first road video data collected by a camera;
acquiring second road video data acquired by a camera;
detecting position information of a moving object based on the second road video data;
matching the position information of the moving object with the position range information of the road-related facilities to determine a matching result;
generating offset alarm information under the condition that the matching result meets an offset condition;
and sending the deviation alarm information to adjust the shooting angle of the camera.
Optionally, the detecting position information of the moving object based on the second road video data includes:
performing target identification based on each frame of road image in the second road video data, and determining at least one target object;
tracking the at least one target object in the second road video data, and determining at least one moving object;
determining location information of the at least one moving object.
Optionally, the matching the position information of the moving object with the position range information of the road-related facility to determine a matching result includes:
comparing the location information of the moving object with the location range information of the road-related facility;
counting overlapped information in which the position information of the moving object appears in the position range information of the road-related facility;
and generating a matching result according to the overlapping information.
Optionally, the types of the moving object include: a motorized class and a non-motorized class, and the road-related facilities include: a motorized-associated class and a non-motorized-associated class;
the comparing the position information of the moving object with the position range information of the road-related facility includes:
comparing the position information of a motorized moving object with the position range information of a non-motorized-associated road-related facility to determine whether the two overlap; and/or,
comparing the position information of a non-motorized moving object with the position range information of a motorized-associated road-related facility to determine whether the two overlap.
Optionally, the counting of overlapping information in which the position information of the moving object appears in the position range information of the road-related facility includes:
counting the number of times the position information of the moving object appears in the position range information of the road-related facility within a set time;
and calculating the overlap frequency from the set time and the number of overlaps.
Optionally, in a case that the matching result meets an offset condition, generating offset alarm information includes:
judging whether the overlapping information exceeds an overlapping threshold value;
if the overlap information exceeds the overlap threshold, it is determined that an offset condition is satisfied and offset warning information is generated.
Optionally, the method further includes: analyzing offset information based on the position information of the overlapped moving object and the position range information of the road-related facility, the offset information including: an offset direction and/or an offset angle; adding the offset information to the offset alert information.
Optionally, the method further includes: the position range information of the road-related facilities is determined in advance based on the first road video data collected by the camera.
Optionally, the determining, in advance, the position range information of the road-related facility based on the first road video data collected by the camera includes:
acquiring each frame of road image from the first road video data;
performing semantic segmentation on each frame of road image respectively to determine a segmentation result;
and determining the position range information of the road-related facilities according to the segmentation result.
Optionally, the determining, in advance, the position range information of the road-related facility based on the first road video data collected by the camera includes:
inputting the first road video data into a semantic segmentation model for semantic segmentation, and outputting a segmentation result; and determining the position range information of the road associated facilities according to the segmentation result.
Optionally, the road-related facility includes: roads and road infrastructure; the road comprises a motor vehicle lane, a non-motor vehicle lane and a sidewalk; the road infrastructure comprises at least one of: traffic signs, marking lines, pedestrian overpasses, pedestrian underpasses, separation facilities, road display screens, lighting equipment and bus stops; and the separation facility comprises at least one of: guardrails, pillars, green belts and flower beds.
The embodiment of the application discloses a method for detecting the position offset of a camera, which is characterized by comprising the following steps:
acquiring position range information of a static facility, wherein the position range information of the static facility is determined based on first video data collected by a camera;
acquiring second video data acquired by a camera;
detecting position information of a moving object based on the second video data;
matching the position information of the moving object with the position range information of the static facility to determine a matching result;
generating offset alarm information under the condition that the matching result meets an offset condition;
and sending the deviation alarm information to adjust the shooting angle of the camera.
An embodiment of the present application discloses a position offset detection apparatus for a camera, characterized in that the apparatus includes:
the facility determining module is used for acquiring the position range information of the road associated facility, and the position range information of the road associated facility is determined in advance based on the first road video data collected by the camera;
the position detection module is used for acquiring second road video data acquired by the camera; detecting position information of a moving object based on the second road video data;
the offset detection module is used for matching the position information of the moving object with the position range information of the road-related facility to determine a matching result;
the alarm module is used for generating offset alarm information under the condition that the matching result meets an offset condition; and sending the deviation alarm information to adjust the shooting angle of the camera.
The embodiment of the application also discloses an electronic device, which comprises: a processor; and a memory having executable code stored thereon that, when executed by the processor, performs a method as described in embodiments of the present application.
One or more machine-readable media having stored thereon executable code that, when executed by a processor, performs a method as described in embodiments of the present application are also disclosed.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiments of the present application, the position range information of road-related facilities can be determined in advance based on first road video data collected by the camera; second road video data collected by the camera is then obtained, the position information of a moving object is detected based on the second road video data, and the position information of the moving object is matched with the position range information of the road-related facilities to determine a matching result. The moving objects on the road are thereby compared with the static road facilities, and it is analyzed whether a moving object appears at the position of a static road facility where it should not appear. When the matching result satisfies an offset condition, offset alarm information is generated and sent so that the shooting angle of the camera can be adjusted. An offset can thus be discovered and reported in time, ensuring the accuracy of subsequent results based on the camera picture.
Drawings
Fig. 1 is a schematic diagram of an example of a road camera and a shooting scene thereof according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating steps of an embodiment of a method for detecting a positional deviation of a camera according to the present disclosure;
FIG. 3 is a flow chart of steps in another embodiment of a method of detecting a positional offset of a camera according to the present application;
FIG. 4 is a flow chart illustrating steps of another embodiment of a method for detecting a positional displacement of a camera according to the present application;
FIG. 5 is a flow chart illustrating steps of another embodiment of a method for detecting a camera position offset according to the present disclosure;
fig. 6 is a schematic structural diagram of an exemplary apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Embodiments of the present application can be applied to various scenes captured by cameras, such as road shooting by road cameras, or shooting by cameras in public areas such as shopping malls and hospitals. The shooting angle of some cameras can be adjusted to capture video at different angles; such cameras typically shoot in a polling mode, i.e., switching among angles at fixed intervals so that video at every angle is captured.
In the shooting process, if the angle of the camera is shifted, for example, the road camera is shifted due to wind power, etc., the picture of the video data shot by the camera is deviated, so that the recognition result based on the picture is problematic. Therefore, the embodiment of the application can judge whether the camera has deviation or not based on the positions of the static object and the movable object, and early warning is carried out when the camera deviates.
Referring to fig. 1, a schematic diagram of an example of a road camera and its shooting scene is shown. The road is a six-lane road with a pedestrian crossing, which is captured by a dome-type (PTZ) road camera. The dotted line marks the shooting range at the correct camera angle. Camera offset detection is shown in fig. 2:
step 202, obtaining position range information of road associated facilities, wherein the position range information of the road associated facilities is determined in advance based on first road video data collected by a camera.
The embodiment of the application judges whether the camera has the offset or not based on the positions of the static object and the movable object. Therefore, the road video data of the camera can be collected in advance, and can be called as first road video data. The first road video data can be road video data collected by a camera under the condition of good imaging conditions, namely the road video data collected by the camera can be screened, and the road video data with the imaging conditions meeting the detection requirements is determined to be used as the first road video data, wherein the detection requirements can be determined based on various conditions such as illumination, shielding and the like.
The first roadway video data can then be analyzed to determine location range information for roadway related assets. The first road video data can be analyzed, road associated facilities in the video data can be determined through various modes such as target recognition and semantic segmentation, and the position range of the road associated facilities can be determined.
Wherein the road-related facility includes: roads and road infrastructure; the road comprises a motor vehicle lane, a non-motor vehicle lane and a sidewalk; the road infrastructure comprises at least one of: traffic signs, marking lines, pedestrian overpasses, pedestrian underpasses, separation facilities, road display screens, lighting equipment and bus stops; and the separation facility comprises at least one of: guardrails, pillars, green belts and flower beds.
In an optional embodiment, the determining the position range information of the road-related facility in advance based on the first road video data collected by the camera includes: acquiring each frame of road image from the first road video data; performing semantic segmentation on each frame of road image respectively to determine a segmentation result; and determining the position range information of the road associated facilities according to the segmentation result. The method comprises the steps of sequentially acquiring each frame of road image from first road video data, performing semantic segmentation on each frame of road image, predicting semantic type of each pixel point, determining segmentation results, determining which road-related facilities the segmentation targets belong to based on the segmentation results, determining positions of the segmentation targets and the like, and determining position ranges of the segmentation targets.
The semantic segmentation can be implemented by a semantic segmentation model based on deep learning: the first road video data is input into the semantic segmentation model for semantic segmentation, and a segmentation result is output; the position range information of the road-related facilities is then determined according to the segmentation result. The semantic segmentation model performs semantic segmentation on each frame of road image and outputs segmentation results, from which the position range information of each road-related facility can be determined. Semantic image segmentation is an important part of image understanding in computer vision. In embodiments of the present application, various deep learning network models can be applied to the semantic segmentation task to achieve segmentation of road-related facilities, for example the FCN (Fully Convolutional Network) algorithm, the SegNet algorithm, or the DeepLab V3+ algorithm. Thus, each road-related facility can be determined based on semantic segmentation, and its position range information can be determined.
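As a minimal illustration of how a segmentation result yields position range information, the toy sketch below derives a bounding box per facility class from a per-pixel label mask. The class ids, the tiny mask, and the function name are illustrative assumptions; in practice the mask would come from a model such as FCN, SegNet or DeepLab V3+.

```python
# Hypothetical sketch: deriving facility position ranges from a per-pixel
# semantic segmentation mask. Class ids and the toy mask are illustrative.

def facility_ranges(mask):
    """For each non-background class id in the mask, collect the bounding
    box (min_row, min_col, max_row, max_col) of its pixels -- serving as
    the 'position range information' of that facility."""
    boxes = {}
    for r, row in enumerate(mask):
        for c, cls in enumerate(row):
            if cls == 0:                  # 0 = background / unlabeled
                continue
            if cls not in boxes:
                boxes[cls] = [r, c, r, c]
            b = boxes[cls]
            b[0], b[1] = min(b[0], r), min(b[1], c)
            b[2], b[3] = max(b[2], r), max(b[3], c)
    return {cls: tuple(b) for cls, b in boxes.items()}

# Toy 4x6 mask: 1 = motor vehicle lane, 2 = sidewalk
MASK = [
    [1, 1, 1, 0, 2, 2],
    [1, 1, 1, 0, 2, 2],
    [1, 1, 1, 0, 2, 2],
    [1, 1, 1, 0, 2, 2],
]
print(facility_ranges(MASK))   # {1: (0, 0, 3, 2), 2: (0, 4, 3, 5)}
```

A real system would keep the full pixel set (or polygon) per facility rather than a bounding box, but the box form is enough to show the data flow from segmentation result to position range.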
In the embodiment of the application, the camera can move among a plurality of angles so as to adjust the shooting range, so that one angle can be determined as a reference angle, the position range information of the road-related facility corresponding to the reference angle can be determined as a reference, and the images shot at other angles can be corrected to the reference angle so as to be matched with the position range information of the road-related facility corresponding to the reference angle. Of course, in other embodiments, each angle may also correspond to location range information of one road-related facility, which is not limited in this application.
And step 204, acquiring second road video data acquired by the camera.
After the position range information of the road associated facility is determined, the angle of the camera can be detected in real time or periodically, that is, the second road video data collected by the camera can be obtained in real time, or the second road video data collected by the camera can be detected periodically, and the detection period can be shorter than the angle adjustment period of the camera, so that whether the angle is accurate or not is detected at least once in each angle adjustment period.
In the embodiments of the present application, road video data collected by the same camera is divided by usage into first road video data and second road video data: the road video data used for preprocessing is called first road video data, and the road video data used for detection is called second road video data. The second road video data can be road video data collected at any time under any imaging condition; because the positions of the static road-related facilities were already analyzed during preprocessing, the second road video data can still be used to detect moving objects even under greatly changed illumination (such as at night or on cloudy days) or severe occlusion, enabling all-day, all-weather detection.
And step 206, detecting the position information of the movable object based on the second road video data.
The second road video data may be subjected to target detection, and the moving object and the position information of the moving object may be determined, where the position information may be a position information set of a position point of the moving object.
In an optional embodiment, the detecting of the position information of the moving object based on the second road video data includes: performing target recognition on each frame of road image in the second road video data to determine at least one target object; tracking the at least one target object in the second road video data to determine at least one moving object; and determining position information of the at least one moving object. In the road scene, the moving objects on the road comprise vehicles and pedestrians, wherein the vehicles comprise motor vehicles and non-motor vehicles; non-motor vehicles and pedestrians both belong to non-motorized moving objects, and motor vehicles belong to motorized moving objects.
The detection of moving objects can perform target detection based on deep learning methods, such as traffic target detection, and can be implemented with various deep learning algorithms, for example target detection and recognition algorithms based on region proposals, such as R-CNN, Fast R-CNN and Faster R-CNN, or based on regression, such as YOLO and SSD. The second road video data can be input into a deep-learning target detection model, thereby enabling recognition and tracking of moving objects and determining the position information of at least one moving object.
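The detect-and-track step above can be sketched minimally. This is not the patent's implementation: the sketch replaces the deep-learning detector with pre-computed centroids and links them frame-to-frame with a greedy nearest-neighbour rule; the function names, distance threshold and sample detections are all illustrative assumptions.

```python
# Hypothetical sketch: linking per-frame detections into tracks and
# flagging tracks that actually move. A real system would feed detections
# from e.g. Faster R-CNN / YOLO / SSD into a proper multi-object tracker.

import math

def track(frames, max_dist=5.0):
    """frames: list of per-frame detection lists, each detection an (x, y)
    centroid. Returns a dict track_id -> list of positions over time."""
    tracks, next_id = {}, 0
    prev = {}                          # track_id -> last known position
    for dets in frames:
        assigned = {}
        for pt in dets:
            # greedily match to the nearest not-yet-assigned previous track
            best, best_d = None, max_dist
            for tid, last in prev.items():
                if tid in assigned:
                    continue
                d = math.dist(pt, last)
                if d < best_d:
                    best, best_d = tid, d
            if best is None:           # no match: start a new track
                best = next_id
                next_id += 1
                tracks[best] = []
            tracks[best].append(pt)
            assigned[best] = pt
        prev = assigned
    return tracks

def moving_ids(tracks, min_disp=1.0):
    """Tracks whose start-to-end displacement exceeds min_disp count as
    moving objects; the rest are treated as static."""
    return {tid for tid, pts in tracks.items()
            if len(pts) > 1 and math.dist(pts[0], pts[-1]) >= min_disp}

frames = [[(0, 0), (10, 10)], [(1, 0), (10, 10)], [(2, 0), (10, 10)]]
tracks = track(frames)
print(moving_ids(tracks))   # track 0 moved along x; track 1 stayed put
```

The greedy matching is order-dependent and would be replaced by e.g. Hungarian assignment in practice; here it only serves to show how tracking separates moving objects from static ones before the position comparison.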
And step 208, matching the position information of the movable object with the position range information of the road-related facility to determine a matching result.
The matching result may be determined by matching the position information of the moving object with the position range information of the road-related facility, determining whether the positions of the moving object and the road-related facility overlap, i.e., whether the moving object appears at a position where it should not appear. The matching includes: comparing the position information of the moving object with the position range information of the road-related facility, and counting overlapping information in which the position information of the moving object appears in the position range information of the road-related facility; and generating a matching result according to the overlapping information. The comparison determines whether the position information of the moving object overlaps with the position range information of a mutually exclusive road-related facility, i.e., a facility on which the moving object should not move: for example, pedestrians should not move on a motorway, flower bed, green belt, etc., and vehicles should not move on a non-motorized lane, flower bed, green belt, etc. The overlapping information, such as the number of overlaps, overlap frequency and overlap duration, can be counted as the matching result.
In an optional embodiment, the counting of overlapping information in which the position information of the moving object appears in the position range information of the road-related facility includes: counting the number of times the position information of the moving object appears in the position range information of the road-related facility within a set time; and calculating the overlap frequency from the set time and the number of overlaps. Embodiments of the present application can collect second video data over a set time, e.g., accumulating video data collected in real time up to the set time, to obtain the matching result. Within the set time, the number of overlaps with the road-related facility, i.e., the number of times the position information of the moving object appears in the position range information of the road-related facility, can be counted for each moving object; the overlap frequency is then calculated based on the set time and the number of overlaps. Accordingly, each moving object corresponds to its own number of overlaps.
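The overlap statistics described above reduce to a simple count-and-divide, sketched below under the assumption that a facility's position range is a rectangle and an object position is a point; both simplifications, the function names, and the sample data are illustrative.

```python
# Hypothetical sketch: counting how often a moving object's sampled
# positions fall inside a facility's position range within a set time
# window, and deriving an overlap frequency from the count.

def inside(pt, box):
    """box = (min_x, min_y, max_x, max_y); pt = (x, y)."""
    x, y = pt
    return box[0] <= x <= box[2] and box[1] <= y <= box[3]

def overlap_stats(positions, region, window_s):
    """positions: one sampled position per observation of a moving object
    within the window. Returns (overlap_count, overlaps_per_second)."""
    count = sum(1 for p in positions if inside(p, region))
    return count, count / window_s

flower_bed = (0, 0, 4, 4)                        # facility position range
samples = [(1, 1), (2, 3), (9, 9), (3, 2)]       # object positions in window
print(overlap_stats(samples, flower_bed, 10.0))  # (3, 0.3)
```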
In the embodiment of the application, in order to avoid the influence of individual noise and other conditions on the matching accuracy, the overlapping frequency of each moving object can be analyzed and filtered. For example, when the number of the active objects whose overlapping frequencies exceed the matching threshold exceeds the number threshold, the overlapping frequencies of the active objects may be added to the matching result. If the number threshold is not exceeded, then some moving objects are considered noise, such as noise caused by an individual crossing a road, etc., and the noisy data can be discarded to obtain a matching result.
In the embodiments of the present application, the detected overlap is an overlap that should not occur, such as a pedestrian walking onto a flower bed or motorway, or a vehicle driving on a flower bed or sidewalk; therefore, each moving object is compared with the road-related facilities that are mutually exclusive with it. The types of moving objects may include a motorized class and a non-motorized class, and the road-related facilities may include a motorized-associated class and a non-motorized-associated class. Motorized-associated facilities are road-related facilities associated with the normal driving of motor vehicles, such as motor vehicle lanes and their traffic signs, separation facilities and road display screens; non-motorized-associated facilities are road-related facilities associated with pedestrians and non-motor vehicles, such as non-motorized lanes and their traffic signs, separation facilities, sidewalks and their traffic signs, pedestrian overpasses, pedestrian underpasses, lighting equipment and bus stops.
In an optional embodiment, the comparing of the position information of the moving object with the position range information of the road-related facility includes: comparing the position information of a motorized moving object with the position range information of a non-motorized-associated road-related facility to determine whether the two overlap; and/or comparing the position information of a non-motorized moving object with the position range information of a motorized-associated road-related facility to determine whether the two overlap. Comparing the position information of a motorized moving object with the position range information of non-motorized-associated facilities can detect whether a vehicle is driving on a flower bed, a sidewalk or a similar position; comparing the position information of a non-motorized moving object with the position range information of motorized-associated facilities can detect whether a pedestrian or non-motor vehicle is moving on a flower bed, a motorway or a similar position. To exclude interference from abnormal factors, such as pedestrians crossing the road at will instead of using the crosswalk, the number of overlaps, overlap frequency, overlap duration, etc. may be determined based on the second road video data over a period of time, and detection may be based on the overlapping information of multiple moving objects.
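The class-wise comparison above amounts to checking each object only against facilities it is mutually exclusive with. The sketch below encodes that as an exclusion table; the class labels, table contents, and rectangle geometry are illustrative assumptions, not the patent's data model.

```python
# Hypothetical sketch: motorized moving objects are checked against
# non-motorized-associated facilities and vice versa, since only those
# pairings indicate an overlap that "should not occur".

EXCLUSION = {   # object class -> facility classes it must not overlap
    "motorized": {"sidewalk", "flower_bed", "non_motor_lane"},
    "non_motorized": {"motor_lane", "emergency_lane", "flower_bed"},
}

def forbidden_overlaps(objects, facilities):
    """objects: list of (object_class, (x, y)); facilities: list of
    (facility_class, (min_x, min_y, max_x, max_y)). Returns the pairs
    that overlap although they are mutually exclusive."""
    hits = []
    for ocls, (x, y) in objects:
        for fcls, (x0, y0, x1, y1) in facilities:
            if (fcls in EXCLUSION.get(ocls, set())
                    and x0 <= x <= x1 and y0 <= y <= y1):
                hits.append((ocls, fcls))
    return hits

objs = [("non_motorized", (2, 2)), ("motorized", (8, 8))]
facs = [("motor_lane", (0, 0, 4, 4)), ("sidewalk", (6, 6, 9, 9))]
print(forbidden_overlaps(objs, facs))
# [('non_motorized', 'motor_lane'), ('motorized', 'sidewalk')]
```

A pedestrian inside the motor lane and a vehicle inside the sidewalk region both register as forbidden overlaps; a pedestrian on the sidewalk would not, since that pairing is absent from the exclusion table.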
Step 210, generating offset alarm information in the case that the matching result satisfies an offset condition.
Whether an offset condition is satisfied can be determined based on the matching result, and if it is, offset alarm information is generated. The generating of the offset alarm information in this case includes: judging whether the overlap information exceeds an overlap threshold; and if it does, determining that the offset condition is satisfied and generating the offset alarm information. Judging whether the overlap information exceeds the overlap threshold may include, for example, judging whether the overlap count exceeds a count threshold, whether the overlap frequency exceeds a frequency threshold, or whether the overlap duration exceeds a duration threshold. In still other examples, the offset condition may be determined to be satisfied when the overlap information of multiple moving objects exceeds the overlap threshold, after which the offset alarm information is generated.
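The threshold judgment of step 210 might look like the following sketch; the statistic fields and threshold values are illustrative assumptions rather than values fixed by this application.

```python
from dataclasses import dataclass

@dataclass
class OverlapStats:
    count: int          # number of abnormal overlaps observed in the window
    frequency: float    # overlaps per minute over the window
    duration_s: float   # longest continuous overlap, in seconds

def offset_condition_met(stats, count_thr=5, freq_thr=2.0, dur_thr=10.0):
    """Return True if any overlap statistic exceeds its threshold."""
    return (stats.count > count_thr
            or stats.frequency > freq_thr
            or stats.duration_s > dur_thr)

def make_offset_alert(camera_id, stats):
    """Build an alarm payload; carrying the camera identifier lets the
    receiver determine which camera has drifted."""
    if not offset_condition_met(stats):
        return None
    return {"camera_id": camera_id, "overlap": stats}
```

In practice the statistics would be accumulated from the per-frame overlap checks over a period of the second road video data before this decision is made.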
The camera identifier of the camera in which the offset occurred, among other data, can be added to the offset alarm information, so that the offending camera can be determined from the alarm. Offset information may also be analyzed based on the position information of the overlapped moving object and the position range information of the road-related facility, the offset information including an offset direction and/or an offset angle, and then added to the offset alarm information. For example, the direction of the offset can be analyzed by comparing the position information of the overlapped moving object with the position range information of the road-related facility, including both the associated type and the mutually exclusive type, to determine the offset position where the moving object is located and its normal (non-offset) position, thereby detecting the direction, the angle, and other parameters of the offset.
In the example shown in fig. 1, detection finds that the normal shooting range of the camera is the range corresponding to the dot-dash line, while the actual shooting range is the range corresponding to the dotted line; both the vehicle and the pedestrian are therefore detected as offset, with the vehicle moving at the boundary between the motorway and the sidewalk and the pedestrian moving a short distance up onto the sidewalk. Accordingly, the leftward (in the figure) deviation of the camera, among other parameters, can be analyzed from the offset vehicle and pedestrian, and offset alarm information can be generated.
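One hedged way to realize the offset-direction analysis is to compare the centroid of the overlapped objects' observed positions with the centroid of their normal (non-offset) region; the displacement vector then yields a direction and an angle. This centroid heuristic is an assumption for illustration, not the algorithm stated by this application.

```python
import math

def centroid(points):
    """Mean (x, y) of a list of points."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def estimate_offset(observed_positions, expected_positions):
    """Return (direction, angle_deg): the displacement from the centroid
    of the expected (non-offset) region to the centroid of the observed
    object positions, in image coordinates (y grows downward)."""
    ox, oy = centroid(observed_positions)
    ex, ey = centroid(expected_positions)
    dx, dy = ox - ex, oy - ey
    angle = math.degrees(math.atan2(dy, dx))
    direction = "right" if dx > 0 else "left"
    return direction, angle
```

The resulting direction and angle could then be attached to the offset alarm information alongside the camera identifier.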
Step 212, sending the offset alarm information to adjust the shooting angle of the camera.
The offset alarm information can be sent, for example, to a device of the department that manages the camera, to a server-side device, or the like, so that the shooting angle of the camera can be adjusted in time.
In summary, the position range information of the road-related facility may be determined in advance based on the first road video data collected by the camera. The second road video data collected by the camera is then obtained, the position information of the moving object is detected from it, and that position information is matched against the position range information of the road-related facility to determine a matching result. The moving objects on the road are thus compared with the static road facilities to analyze whether a moving object appears at the position of a static road facility where it should not appear. In the case that the matching result satisfies the offset condition, offset alarm information is generated and sent so that the shooting angle of the camera can be adjusted. The offset can therefore be found and alarmed in time, which ensures the accuracy of results that depend on the camera picture.
In the embodiment of the application, the control end of the camera can execute the following processing steps: periodically instructing the camera to adjust its shooting angle; receiving offset alarm information; and adjusting the shooting angle according to the offset alarm information.
During the working process of the camera, the control end of the camera, such as a central control end or a controller of the camera, can periodically instruct the camera to adjust its shooting angle. If the shooting angle deviates for any reason, the deviation can be detected at the detection end in the manner described above, and offset alarm information is then sent. The control end receives the offset alarm information and adjusts the shooting angle based on it: for example, the deviation of the camera position is determined from the alarm information, and the deviation angle can be detected and adjusted automatically, or determined from the captured image; offset information such as the offset direction and offset angle can also be obtained directly from the offset alarm information, so that the shooting angle of the camera can be adjusted.
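The control-end processing steps above can be sketched as follows; the camera interface, the preset angles, and the correction rule are all hypothetical illustrations.

```python
class CameraController:
    """Toy control end: cycles through preset shooting angles and applies
    a correction when an offset alert arrives."""

    def __init__(self, presets):
        self.presets = presets      # preset pan angles, in degrees
        self.index = 0
        self.pan = presets[0]

    def tick(self):
        """Periodic adjustment: move the camera to the next preset angle."""
        self.index = (self.index + 1) % len(self.presets)
        self.pan = self.presets[self.index]
        return self.pan

    def on_offset_alert(self, alert):
        """Correct the shooting angle using the offset direction and angle
        carried in the alarm information (pan back against the drift)."""
        sign = -1 if alert["direction"] == "left" else 1
        self.pan -= sign * alert["angle_deg"]
        return self.pan
```

A real controller would drive pan-tilt hardware instead of updating a number, but the protocol shape (periodic adjustment plus alert-driven correction) is the same as described above.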
On the basis of the above embodiments, the present application further provides a camera position deviation detection method, which can analyze the position range information of the road-related facility in advance.
Referring to FIG. 3, a flow chart of the steps of a preprocessing embodiment of another camera position offset detection method of the present application is shown.
Step 302, collecting, through a camera, first road video data whose imaging conditions meet the detection requirement.
Step 304, acquiring each frame of road image from the first road video data.
Step 306, performing semantic segmentation on each frame of road image, and determining a segmentation result.
The first road video data can be input into the semantic segmentation model for processing, so that semantic segmentation is respectively carried out on the basis of each frame of road image, and a corresponding segmentation result is output.
Step 308, determining the position range information of the road-related facilities according to the segmentation result.
In the preprocessing process, some static road-related facilities can be detected in advance, so road video data collected under good imaging conditions can be selected for analysis, and the road-related facilities, i.e. roads and road facilities, are determined based on deep-learning semantic segmentation. The road includes a motorway, a non-motor lane, and a sidewalk; the road facility includes at least one of: traffic signs, marking lines, pedestrian overpasses, pedestrian underpasses, separation facilities, road display screens, lighting equipment, and bus stops; the separation facility includes at least one of: guardrails, pillars, green belts, and flower beds.
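Assuming the semantic segmentation model outputs a per-pixel class label map for each frame, the position range information of step 308 could be derived as one pixel extent per facility class, as in this toy sketch (the labels and the tiny grid are illustrative data, not model output).

```python
def class_ranges(label_map):
    """Map each facility class label to its (x1, y1, x2, y2) pixel extent."""
    ranges = {}
    for y, row in enumerate(label_map):
        for x, label in enumerate(row):
            if label == "background":
                continue
            x1, y1, x2, y2 = ranges.get(label, (x, y, x, y))
            ranges[label] = (min(x1, x), min(y1, y), max(x2, x), max(y2, y))
    return ranges

# Toy 3x4 label map: top row sidewalk, bottom two rows motorway.
label_map = [
    ["sidewalk", "sidewalk", "sidewalk", "sidewalk"],
    ["motorway", "motorway", "motorway", "motorway"],
    ["motorway", "motorway", "motorway", "motorway"],
]
```

A real implementation would aggregate the per-frame results over the whole first road video data (and might keep polygon masks rather than one rectangle per class), but the idea of reducing the segmentation output to stored position ranges is the same.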
According to the embodiment of the application, static objects such as road-related facilities can be detected in advance under good imaging conditions, which makes them convenient to use as a reference for subsequent matching. Compared with the existing approach of storing multiple scene graphs for image matching, this detection method is simpler and more efficient, and it requires no additional hardware such as a customized pan-tilt head, so there is no extra hardware cost.
On the basis of the above embodiments, the embodiments of the present application further provide a method for detecting a position deviation of a camera, which can detect the position deviation of the camera in time.
Referring to FIG. 4, a flow chart of the steps of another camera position offset detection method embodiment of the present application is shown.
Step 402, acquiring second road video data collected by the camera.
Step 404, performing target identification based on each frame of road image in the second road video data, and determining at least one target object.
Step 406, tracking the at least one target object based on the second road video data, and determining at least one moving object.
Step 408, determining position information of the at least one moving object.
Step 410, comparing the position information of the motor-class moving object with the position range information of the non-motor-associated road-related facility, and judging whether the two overlap.
If yes, step 414 is executed; if not, detection continues at step 410.
Step 412, comparing the position information of the non-motor-class moving object with the position range information of the motor-associated road-related facility, and judging whether the two overlap.
If yes, step 414 is executed; if not, detection continues at step 412.
Step 414, counting the overlap information in which the position information of the moving object appears within the position range information of the road-related facility.
Step 416, generating a matching result according to the overlap information.
Step 418, judging whether the overlap information exceeds an overlap threshold.
If yes, step 420 is executed; otherwise, the process ends.
If the overlap information exceeds the overlap threshold, it is determined that the offset condition is satisfied, and offset alarm information is to be generated.
Step 420, analyzing offset information based on the position information of the overlapped moving object and the position range information of the road-related facility, the offset information including an offset direction and/or an offset angle.
Step 422, adding the offset information into the offset alarm information to generate the offset alarm information.
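Steps 404-408 above (target identification, tracking, and moving-object determination) can be sketched with a greedy nearest-centroid tracker; the association rule and the displacement threshold are illustrative assumptions, not the tracking algorithm specified by this application.

```python
import math

def nearest(point, candidates):
    """Candidate centroid closest to the given point."""
    return min(candidates, key=lambda c: math.dist(point, c))

def track_moving_objects(frames, move_thr=5.0):
    """frames: per-frame lists of detected (x, y) centroids (first frame
    non-empty). Each track greedily takes the nearest detection in the
    next frame; a track whose start-to-end displacement exceeds move_thr
    is judged to be a moving object, and its final position is returned."""
    tracks = [[p] for p in frames[0]]
    for dets in frames[1:]:
        remaining = list(dets)
        for tr in tracks:
            if remaining:
                m = nearest(tr[-1], remaining)
                remaining.remove(m)
                tr.append(m)
    return [tr[-1] for tr in tracks
            if math.dist(tr[0], tr[-1]) > move_thr]
```

Objects whose positions barely change across frames are filtered out, leaving only moving objects whose positions are then compared against the facility ranges in steps 410-414.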
In the embodiment of the application, segmentation is performed based on the road and people and vehicles are detected, so that position offset of all-weather cameras such as dome cameras can be detected around the clock in traffic scenes. The operation is simple, no matching picture is needed for each scene, and no hardware needs to be changed or added, so the cost is low.
Because the embodiment of the application detects the positions of the static road-related facilities in advance, detection can subsequently be carried out around the clock, and the method remains applicable under large illumination changes or severe occlusion.
The above embodiments take a road camera as an example; in practice, the processing can also be applied to other scenes shot by a camera.
Referring to FIG. 5, a flow chart of the steps of another camera position offset detection method embodiment of the present application is shown.
Step 502, obtaining position range information of a static facility, wherein the position range information of the static facility is determined based on first video data collected by a camera.
The determining of the position range information of the static facility in advance based on the first video data collected by the camera includes: acquiring each frame of video image from the first video data; performing semantic segmentation on each frame of video image to determine a segmentation result; and determining the position range information of the static facility according to the segmentation result.
The static facility can be determined based on the scene shot by the camera; in an indoor scene, for example, the static facility can be a wall, storage equipment, or a similar fixture.
Step 504, acquiring second video data collected by the camera.
Step 506, based on the second video data, detecting position information of the moving object.
The detecting of the position information of the moving object based on the second video data includes: performing target identification based on each frame of video image in the second video data, and determining at least one target object; tracking the at least one target object based on the second video data, and determining at least one moving object; and determining the position information of the at least one moving object.
Step 508, matching the position information of the moving object with the position range information of the static facility, and determining a matching result.
The matching of the position information of the moving object with the position range information of the static facility and determining a matching result includes: comparing the position information of the moving object with the position range information of the static facility; counting the overlap information in which the position information of the moving object appears within the position range information of the static facility; and generating a matching result according to the overlap information.
Step 510, generating offset alarm information in the case that the matching result satisfies an offset condition.
The generating of the offset alarm information in the case that the matching result satisfies the offset condition includes: judging whether the overlap information exceeds an overlap threshold; and if it does, determining that the offset condition is satisfied and generating the offset alarm information.
Offset information may also be analyzed based on the position information of the overlapped moving object and the position range information of the static facility, the offset information including an offset direction and/or an offset angle, and then added to the offset alarm information.
Step 512, sending the offset alarm information to adjust the shooting angle of the camera.
The embodiment of the application segments the static facilities based on the specific scene and performs moving-object detection, so that preset-position offset of all-weather cameras such as dome cameras can be detected around the clock. The operation is simple, no matching picture is needed for each scene, and no hardware needs to be changed or added, so the cost is low.
In the embodiments of the application, if user information is involved, it is collected, used, and stored only after being authorized and permitted by the user, and all operations based on the user information are executed only after such authorization and permission.
It should be noted that, for simplicity of description, the method embodiments are described as a series or combination of actions, but those skilled in the art will recognize that the embodiments of the application are not limited by the order of actions described, as some steps may, depending on the embodiment, occur in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments and that the actions involved are not necessarily required by the embodiments of the application.
On the basis of the above embodiments, the present application further provides a device for detecting a position deviation of a camera, which is applied to an electronic device, such as a server-side electronic device.
The facility determining module is used for acquiring the position range information of the road associated facility, and the position range information of the road associated facility is determined in advance based on the first road video data collected by the camera;
the position detection module is used for acquiring second road video data acquired by the camera; detecting position information of a moving object based on the second road video data;
the offset detection module is used for matching the position information of the movable object with the position range information of the road associated facility and determining a matching result;
the alarm module is used for generating offset alarm information under the condition that the matching result meets an offset condition; and sending the deviation alarm information to adjust the shooting angle of the camera.
In summary, the position range information of the road-related facility may be determined in advance based on the first road video data collected by the camera. The second road video data collected by the camera is then obtained, the position information of the moving object is detected from it, and that position information is matched against the position range information of the road-related facility to determine a matching result. The moving objects on the road are thus compared with the static road facilities to analyze whether a moving object appears at the position of a static road facility where it should not appear. In the case that the matching result satisfies the offset condition, offset alarm information is generated and sent so that the shooting angle of the camera can be adjusted. The offset can therefore be found and alarmed in time, which ensures the accuracy of results that depend on the camera picture.
The position detection module is used for performing target identification based on each frame of road image in the second road video data and determining at least one target object; tracking the at least one target object based on the second road video data to determine at least one moving object; and determining the position information of the at least one moving object.
The offset detection module is used for comparing the position information of the moving object with the position range information of the road-related facility; counting the overlap information in which the position information of the moving object appears within the position range information of the road-related facility; and generating a matching result according to the overlap information.
The types of moving objects include a motor class and a non-motor class, and the road-related facilities include a motor-associated class and a non-motor-associated class. The offset detection module is used for comparing the position information of the motor-class moving object with the position range information of the non-motor-associated road-related facility and judging whether the two overlap; and/or comparing the position information of the non-motor-class moving object with the position range information of the motor-associated road-related facility and judging whether the two overlap.
The alarm module is used for judging whether the overlapping information exceeds an overlapping threshold value; and if the overlapping information exceeds the overlapping threshold value, determining that the offset condition is met and generating offset alarm information.
The alarm module is further configured to analyze offset information based on the position information of the overlapped moving object and the position range information of the road-related facility, the offset information including an offset direction and/or an offset angle, and to add the offset information to the offset alarm information.
The facility determining module is used for acquiring each frame of road image from the first road video data; performing semantic segmentation on each frame of road image respectively to determine a segmentation result; and determining the position range information of the road associated facilities according to the segmentation result.
The road-related facility includes: roads and road facilities. The road includes a motorway, a non-motor lane, and a sidewalk; the road facility includes at least one of: traffic signs, marking lines, pedestrian overpasses, pedestrian underpasses, separation facilities, road display screens, lighting equipment, and bus stops; the separation facility includes at least one of: guardrails, pillars, green belts, and flower beds.
The embodiment of the application performs segmentation based on the road and executes person and vehicle detection, so that preset-position offset of all-weather cameras such as dome cameras can be detected around the clock in traffic scenes. The operation is simple, no matching picture is needed for each scene, and no hardware needs to be changed or added, so the cost is low.
Because the embodiment of the application detects the positions of the static road-related facilities in advance, detection can subsequently be carried out around the clock, and the method remains applicable under large illumination changes or severe occlusion.
On the basis of the above embodiments, the present application further provides another apparatus for detecting a position deviation of a camera, which is applied to an electronic device, such as a server-side electronic device.
The system comprises a preprocessing module, a video acquisition module and a video processing module, wherein the preprocessing module is used for acquiring the position range information of a static facility, and the position range information of the static facility is determined based on first video data acquired by a camera;
the activity detection module is used for acquiring second video data acquired by the camera; detecting position information of a moving object based on the second video data;
the offset analysis module is used for matching the position information of the movable object with the position range information of the static facility and determining a matching result;
the offset alarm module is used for generating offset alarm information under the condition that the matching result meets an offset condition; and sending the deviation alarm information to adjust the shooting angle of the camera.
In the embodiments of the application, if user information is involved, it is collected, used, and stored only after being authorized and permitted by the user, and all operations based on the user information are executed only after such authorization and permission.
The present application further provides a non-transitory readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, the device can be caused to execute the instructions of the method steps in this application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an electronic device to perform the methods as described in one or more of the above embodiments. In the embodiment of the application, the electronic device includes a server, a terminal device and other devices.
Embodiments of the present disclosure may be implemented as an apparatus using any suitable hardware, firmware, software, or any combination thereof in a desired configuration; the apparatus may include electronic devices such as servers (or server clusters) and terminals. Fig. 6 schematically illustrates an example apparatus 600 that may be used to implement various embodiments described herein.
For one embodiment, fig. 6 illustrates an exemplary apparatus 600 having one or more processors 602, a control module (chipset) 604 coupled to at least one of the processor(s) 602, a memory 606 coupled to the control module 604, a non-volatile memory (NVM)/storage 608 coupled to the control module 604, one or more input/output devices 610 coupled to the control module 604, and a network interface 612 coupled to the control module 604.
The processor 602 may include one or more single-core or multi-core processors, and the processor 602 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 600 can be used as a server, a terminal, or the like in the embodiments of the present application.
In some embodiments, apparatus 600 may include one or more computer-readable media (e.g., memory 606 or NVM/storage 608) having instructions 614 and one or more processors 602, which in combination with the one or more computer-readable media are configured to execute instructions 614 to implement modules to perform the actions described in this disclosure.
For one embodiment, control module 604 may include any suitable interface controllers to provide for any suitable interface to at least one of processor(s) 602 and/or to any suitable device or component in communication with control module 604.
Control module 604 may include a memory controller module to provide an interface to memory 606. The memory controller module may be a hardware module, a software module, and/or a firmware module.
Memory 606 may be used, for example, to load and store data and/or instructions 614 for device 600. For one embodiment, memory 606 may comprise any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 606 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, control module 604 may include one or more input/output controllers to provide an interface to NVM/storage 608 and input/output device(s) 610.
For example, NVM/storage 608 may be used to store data and/or instructions 614. NVM/storage 608 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 608 may include storage resources that are physically part of the device on which apparatus 600 is installed, or may be accessible by that device without necessarily being part of it. For example, NVM/storage 608 may be accessible over a network via input/output device(s) 610.
Input/output device(s) 610 may provide an interface for apparatus 600 to communicate with any other suitable device, input/output devices 610 may include communication components, audio components, sensor components, and so forth. The network interface 612 may provide an interface for the device 600 to communicate over one or more networks, and the device 600 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, such as access to a communication standard-based wireless network, such as WiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, at least one of the processor(s) 602 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of the control module 604. For one embodiment, at least one of the processor(s) 602 may be packaged together with logic for one or more controller(s) of the control module 604 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 602 may be integrated on the same die with logic for one or more controller(s) of the control module 604. For one embodiment, at least one of the processor(s) 602 may be integrated on the same die with logic of one or more controllers of the control module 604 to form a system on a chip (SoC).
In various embodiments, the apparatus 600 may be, but is not limited to being: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, apparatus 600 may have more or fewer components and/or different architectures. For example, in some embodiments, device 600 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
The detection device can adopt a main control chip as the processor or control module; sensor data, position information, and the like are stored in the memory or NVM/storage device; the sensor group can serve as the input/output device; and the communication interface can include the network interface.
An embodiment of the present application further provides an electronic device, including: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a method as described in one or more of the embodiments of the application. In the embodiment of the present application, various data, such as various data of a target file, a file and application associated data, and the like, may be stored in the memory, and user behavior data may also be included, so as to provide a data basis for various processing.
Embodiments of the present application also provide one or more machine-readable media having executable code stored thereon that, when executed, cause a processor to perform a method as described in one or more of the embodiments of the present application.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they become aware of the basic inventive concept. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or terminal device that comprises the element.
The position deviation detection method for a camera, the electronic device, and the storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present application, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (14)

1. A position deviation detection method for a camera, characterized by comprising:
acquiring position range information of a road-associated facility, wherein the position range information of the road-associated facility is determined in advance based on first road video data collected by the camera;
acquiring second road video data collected by the camera;
detecting position information of a moving object based on the second road video data;
matching the position information of the moving object with the position range information of the road-associated facility to determine a matching result;
generating offset alarm information in the case that the matching result satisfies an offset condition;
and sending the offset alarm information so as to adjust the shooting angle of the camera.
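For illustration only, the flow of claim 1 can be sketched in code. This is a minimal, hypothetical sketch, not the patented implementation: the "position range information" and object positions are reduced to axis-aligned bounding boxes, and the function names and the overlap threshold are assumptions.

```python
# Hypothetical sketch of the claimed flow: positions of moving objects
# (e.g. tracker bounding boxes) are matched against pre-computed position
# ranges of road-associated facilities; if too many objects appear inside
# a facility's range, the camera view is presumed to have shifted and an
# offset alarm is raised. All names and the threshold are illustrative.

def boxes_overlap(box_a, box_b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def match_positions(object_boxes, facility_boxes):
    """Count moving-object boxes that fall within any facility range."""
    return sum(
        1 for obj in object_boxes
        if any(boxes_overlap(obj, fac) for fac in facility_boxes)
    )

def check_offset(object_boxes, facility_boxes, threshold=3):
    """Generate alarm info when the overlap count exceeds the threshold."""
    overlaps = match_positions(object_boxes, facility_boxes)
    return {"alarm": overlaps > threshold, "overlap_count": overlaps}
```

For example, several vehicles detected inside a sidewalk's stored position range would trip the alarm, since vehicles should not normally appear there unless the camera's view has drifted.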
2. The method of claim 1, wherein detecting location information of a moving object based on the second road video data comprises:
performing target recognition on each frame of road image in the second road video data to determine at least one target object;
tracking the at least one target object in the second road video data to determine at least one moving object;
determining location information of the at least one moving object.
3. The method according to claim 1, wherein the matching of the position information of the moving object with the position range information of the road-associated facility to determine a matching result comprises:
comparing the position information of the moving object with the position range information of the road-associated facility;
counting overlap information in which the position information of the moving object appears within the position range information of the road-associated facility;
and generating a matching result according to the overlap information.
4. The method of claim 3, wherein the types of the moving object comprise a motorized class and a non-motorized class, and the road-associated facility comprises a motorized-associated class and a non-motorized-associated class;
the comparing of the position information of the moving object with the position range information of the road-associated facility comprises:
comparing the position information of a motorized moving object with the position range information of a non-motorized-associated road-associated facility, and judging whether the two overlap; and/or,
comparing the position information of a non-motorized moving object with the position range information of a motorized-associated road-associated facility, and judging whether the two overlap.
5. The method according to claim 3 or 4, wherein the counting of overlap information in which the position information of the moving object appears within the position range information of the road-associated facility comprises:
counting the number of overlaps in which the position information of the moving object appears within the position range information of the road-associated facility within a set time;
and calculating an overlap frequency according to the set time and the number of overlaps.
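A minimal sketch of the counting scheme in claims 3 to 5, under the assumption that overlap is checked once per frame and that "overlap frequency" means overlap events per second over the set time window; the class name and its API are illustrative, not from the patent.

```python
# Illustrative counter for claims 3-5: record per-observation overlap
# events over a set time window, then derive an overlap frequency.
# Whether the patent counts per frame or per object is not specified;
# per-observation counting is an assumption made here.

class OverlapCounter:
    def __init__(self, window_seconds):
        self.window_seconds = window_seconds
        self.overlap_count = 0

    def record(self, overlapped):
        """Record one observation (e.g. one frame's overlap test)."""
        if overlapped:
            self.overlap_count += 1

    def frequency(self):
        """Overlap events per second over the configured window."""
        return self.overlap_count / self.window_seconds
```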
6. The method according to claim 5, wherein the generating of offset alarm information in the case that the matching result satisfies an offset condition comprises:
judging whether the overlap information exceeds an overlap threshold;
and if the overlap information exceeds the overlap threshold, determining that the offset condition is satisfied and generating the offset alarm information.
7. The method of claim 6, further comprising:
analyzing offset information based on the position information of the overlapping moving object and the position range information of the road-associated facility, the offset information comprising an offset direction and/or an offset angle;
and adding the offset information to the offset alarm information.
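Claim 7's offset direction and angle could, for instance, be estimated from how overlapping objects are displaced relative to the facility's stored range. The patent does not disclose a formula, so the centroid-displacement scheme below is purely an assumed illustration.

```python
# Assumed sketch for claim 7: the mean displacement of overlapping
# objects' centroids from the facility's centroid gives a direction
# (angle in degrees) and a magnitude for the suspected camera offset.
import math

def centroid(box):
    """Center point of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def estimate_offset(object_boxes, facility_box):
    """Return (angle_degrees, magnitude) of the mean displacement of
    object centroids from the facility centroid."""
    fx, fy = centroid(facility_box)
    dx = sum(centroid(b)[0] - fx for b in object_boxes) / len(object_boxes)
    dy = sum(centroid(b)[1] - fy for b in object_boxes) / len(object_boxes)
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)
```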
8. The method of claim 1, wherein the step of determining, in advance, the position range information of the road-associated facility based on the first road video data collected by the camera comprises:
acquiring each frame of road image from the first road video data;
performing semantic segmentation on each frame of road image to determine a segmentation result;
and determining the position range information of the road-associated facility according to the segmentation result.
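One plausible way to turn the segmentation result of claims 8 and 9 into "position range information" is a per-class bounding rectangle. The representation below (a 2-D list of class labels standing in for a segmentation mask) is an assumption for illustration, and cross-frame aggregation is omitted.

```python
# Hypothetical post-processing for claims 8-9: given per-pixel class
# labels from semantic segmentation (here a plain 2-D list of strings),
# take the bounding rectangle of a facility class as its position range.

def position_range(mask, facility_class):
    """Bounding box (x1, y1, x2, y2) of pixels labeled facility_class,
    or None if the class is absent from the mask."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, label in enumerate(row):
            if label == facility_class:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))
```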
9. The method of claim 1, wherein the step of determining the position range information of the road-associated facility based on the first road video data collected by the camera in advance comprises:
inputting the first road video data into a semantic segmentation model for semantic segmentation, and outputting a segmentation result;
and determining the position range information of the road-associated facility according to the segmentation result.
10. The method according to any one of claims 1 to 4, wherein the road-associated facility comprises: a road and road infrastructure; the road comprises a motor vehicle lane, a non-motor vehicle lane, and a sidewalk; the road infrastructure comprises at least one of: a traffic sign, a marking line, a pedestrian overpass, a pedestrian underpass, a separation facility, a road display screen, lighting equipment, and a bus stop; the separation facility comprises at least one of: a guardrail, a pillar, a green belt, and a flower bed.
11. A position deviation detection method for a camera, characterized by comprising:
acquiring position range information of a static facility, wherein the position range information of the static facility is determined based on first video data collected by the camera;
acquiring second video data collected by the camera;
detecting position information of a moving object based on the second video data;
matching the position information of the moving object with the position range information of the static facility to determine a matching result;
generating offset alarm information in the case that the matching result satisfies an offset condition;
and sending the offset alarm information so as to adjust the shooting angle of the camera.
12. A position deviation detection apparatus for a camera, characterized by comprising:
a facility determination module, configured to acquire position range information of a road-associated facility, the position range information of the road-associated facility being determined in advance based on first road video data collected by the camera;
a position detection module, configured to acquire second road video data collected by the camera, and detect position information of a moving object based on the second road video data;
an offset detection module, configured to match the position information of the moving object with the position range information of the road-associated facility and determine a matching result;
and an alarm module, configured to generate offset alarm information in the case that the matching result satisfies an offset condition, and send the offset alarm information so as to adjust the shooting angle of the camera.
13. An electronic device, characterized by comprising: a processor;
and a memory having executable code stored thereon, wherein the executable code, when executed by the processor, causes the processor to perform the method of any one of claims 1 to 11.
14. One or more machine-readable media having executable code stored thereon, wherein the executable code, when executed by a processor, causes the processor to perform the method of any one of claims 1 to 11.
CN202210629761.8A 2022-06-06 2022-06-06 Position deviation detection method for camera, electronic device, and storage medium Pending CN115174889A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210629761.8A CN115174889A (en) 2022-06-06 2022-06-06 Position deviation detection method for camera, electronic device, and storage medium


Publications (1)

Publication Number Publication Date
CN115174889A true CN115174889A (en) 2022-10-11

Family

ID=83485101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210629761.8A Pending CN115174889A (en) 2022-06-06 2022-06-06 Position deviation detection method for camera, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN115174889A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116392369A * 2023-06-08 2023-07-07 中国电建集团昆明勘测设计研究院有限公司 Identification induction method, device, equipment and storage medium based on blind sidewalk
CN116392369B * 2023-06-08 2023-09-08 中国电建集团昆明勘测设计研究院有限公司 Identification induction method, device, equipment and storage medium based on blind sidewalk

Similar Documents

Publication Publication Date Title
WO2018223955A1 (en) Target monitoring method, target monitoring device, camera and computer readable medium
KR102385280B1 (en) Camera system and method for contextually capturing the surrounding area of a vehicle
KR20180046798A (en) Method and apparatus for real time traffic information provision
CN105493502A (en) Video monitoring method, video monitoring system, and computer program product
CN111932901B (en) Road vehicle tracking detection apparatus, method and storage medium
WO2013186662A1 (en) Multi-cue object detection and analysis
EP2709066A1 (en) Concept for detecting a motion of a moving object
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
KR102122850B1 (en) Solution for analysis road and recognition vehicle license plate employing deep-learning
CN102426785A (en) Traffic flow information perception method based on contour and local characteristic point and system thereof
US20200143177A1 (en) Systems and methods of detecting moving obstacles
US20220253634A1 (en) Event-based vehicle pose estimation using monochromatic imaging
CN112351190A (en) Digital twin monitoring system and method
CN110225236B (en) Method and device for configuring parameters for video monitoring system and video monitoring system
Nguyen et al. Real-time validation of vision-based over-height vehicle detection system
CN113111682A (en) Target object sensing method and device, sensing base station and sensing system
CN115174889A (en) Position deviation detection method for camera, electronic device, and storage medium
Chen et al. Vision-based road bump detection using a front-mounted car camcorder
WO2018209470A1 (en) License plate identification method and system
CN113112813B (en) Illegal parking detection method and device
CN113066306B (en) Management method and device for roadside parking
Dinh et al. Development of a tracking-based system for automated traffic data collection for roundabouts
KR101263894B1 (en) Apparatus and method for tracking wanted vehicle
US11288519B2 (en) Object counting and classification for image processing
KR20180068462A (en) Traffic Light Control System and Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination