CN111540023B - Monitoring method and device of image acquisition equipment, electronic equipment and storage medium - Google Patents
- Publication number: CN111540023B
- Application: CN202010413763.4A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V10/245 — Aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
- G06V20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/48 — Matching video sequences
- G06V20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54 — Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
- G06V20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- H04N17/002 — Diagnosis, testing or measuring for television cameras
- G06T2207/10016 — Video; image sequence
- G06T2207/30232 — Surveillance
- G06T2207/30236 — Traffic on road, railway or crossing
Abstract
The application discloses a monitoring method and apparatus for an image acquisition device, an electronic device, and a storage medium, relating to the field of computer vision. The monitoring method is implemented as follows: the stop position of each target vehicle is determined from a first video image captured by the image acquisition device; a sign line in the first video image is determined according to the stop positions of the target vehicles; and the offset of the sign line in the first video image relative to a reference sign line is determined, the image acquisition device being determined to have shifted when the offset reaches a predetermined condition. In this scheme, the stop positions of vehicles are used to determine the sign line, the sign line in the first video image is compared with the reference sign line, and the shift state of the image acquisition device is determined from the comparison result. The workload of manual inspection is thereby reduced, and comparing sign lines improves detection accuracy.
Description
Technical Field
The present application relates to the field of computer vision, and in particular, to a monitoring method and apparatus for an image capturing device, an electronic device, and a storage medium.
Background
In a video monitoring scene, the position and orientation angle of an image acquisition device are measured and calculated in advance, so that the video images it captures cover a good monitoring view.
If the position or orientation angle of the image acquisition device shifts due to an unexpected event such as an external force, monitoring is affected. In the prior art, workers must periodically check each image acquisition device for such shifts. Manual inspection is inefficient, however, and the lack of a reference object results in poor monitoring accuracy.
Disclosure of Invention
The embodiment of the application provides a monitoring method and device for image acquisition equipment, electronic equipment and a storage medium, and aims to solve one or more technical problems in the prior art.
In a first aspect, the present application provides a method for monitoring an image capturing device, the method comprising:
determining the stop position of each target vehicle from a first video image acquired by image acquisition equipment;
determining a sign line in the first video image according to the stop position of each target vehicle;
and determining the offset of the mark line in the first video image relative to the reference mark line, and determining that the image acquisition equipment is offset when the offset reaches a preset condition.
With this scheme, the offset between the sign line in the first video image and the reference sign line is calculated, and the shift state of the image acquisition device is determined from the calculation result, reducing the workload of manual inspection. Because a reference sign line is used, the accuracy of offset monitoring can be improved.
In one embodiment, the target vehicle is determined by:
identifying each vehicle appearing in each frame of static image of the first video image to acquire the running track of each vehicle;
determining each first vehicle stopping in the driving process according to the driving track of each vehicle;
a first vehicle, in which no other vehicle exists within a forward predetermined range, is determined as a target vehicle.
Through the scheme, the target vehicle for determining the sign line can be automatically confirmed by utilizing the vehicle identification and tracking technology.
In one embodiment, determining each first vehicle that has come to a stop while traveling includes:
acquiring the position variation of each vehicle in each frame of static image;
the vehicle whose position variation amount is lower than the threshold value is determined as the first vehicle.
Through the scheme, each vehicle which stops in the driving process can be accurately screened out.
In one embodiment, determining a sign line in a first video image based on a stop position comprises:
counting the coordinates of the stop positions of the target vehicles;
and determining the sign line in the first video image according to the statistical result.
With this scheme, the position of the intersection stop line can be mined from the stop positions of a predetermined number of target vehicles, and the stop line is used as the sign line in the image, so that the sign line can be identified automatically.
In one embodiment, the method further comprises:
and in the case that the offset does not reach the preset condition, the reference marking line is adjusted by using the marking line in the first video image.
With this scheme, given a sufficient number of sign line samples, the adjusted reference sign line can approach the true position of the sign line.
In a second aspect, the present application provides a monitoring device for an image capturing apparatus, the device comprising:
the stop position determining module is used for determining the stop position of each target vehicle from the first video image acquired by the image acquisition equipment;
the marking line determining module is used for determining a marking line in the first video image according to the stop position of each target vehicle;
and the offset determining module is used for determining the offset of the mark line in the first video image relative to the reference mark line and determining that the image acquisition equipment is offset when the offset reaches a preset condition.
In one embodiment, a stop position determination module includes:
the driving track determining submodule is used for identifying each vehicle appearing in each frame of static image of the first video image and acquiring the driving track of each vehicle;
the first vehicle determination submodule is used for determining each first vehicle which stops in the driving process according to the driving track;
and the target vehicle determination sub-module is used for determining a first vehicle without other vehicles in the forward preset range as the target vehicle.
In one embodiment, the first vehicle determination submodule is further configured to:
the position variation amount of each vehicle in each frame of the still image is acquired, and the vehicle having the position variation amount lower than the threshold value is determined as the first vehicle.
In one embodiment, a sign line determination module includes:
the coordinate counting submodule is used for counting the coordinates of the stop positions of the target vehicles;
and the marking line determining and executing submodule is used for determining the marking line in the first video image according to the statistical result.
In one embodiment, the method further comprises:
and the reference sign line adjusting module is used for adjusting the reference sign line by using the sign line in the first video image under the condition that the offset does not reach a preset condition.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method provided by any one of the embodiments of the present application.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are configured to cause a computer to perform a method provided in any one of the embodiments of the present application.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flow chart of a monitoring method of an image acquisition device according to the present application;
FIG. 2 is a flow chart of determining a target vehicle according to the present application;
FIG. 3 is a flow chart for determining a sign line in a first video image according to the present application;
FIG. 4 is a schematic view of a monitoring device of an image capture device according to the present application;
FIG. 5 is a schematic view of a stop position determination module according to the present application;
FIG. 6 is a schematic diagram of a sign line determination module according to the present application;
fig. 7 is a block diagram of an electronic device for implementing a monitoring method of an image capturing device according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted in the following for clarity and conciseness.
As shown in fig. 1, in one embodiment, there is provided a monitoring method of an image capturing apparatus, including the steps of:
s101: from the first video image captured by the image capturing device, the stop position of each target vehicle is determined.
S102: the sign line in the first video image is determined according to the stop position of each target vehicle.
S103: and determining the offset of the mark line in the first video image relative to the reference mark line, and determining that the image acquisition equipment is offset when the offset reaches a preset condition.
These steps can be performed by a data processing device such as a server or a cloud processor. The data processing device receives the information uploaded by all image acquisition devices deployed in the target area; by analyzing the uploaded information, the shift state of each image acquisition device can be detected.
The first video image may comprise multiple consecutive frames of still images, for example the consecutive frames captured over an hour or over a day. It will be appreciated that the still images may also be non-consecutive frames.
All vehicles in each frame of static image of the first video image are identified to determine the target vehicle. The identification means may include one or more of license plate number identification, body color identification, body pattern identification, or vehicle type identification, etc.
By identifying vehicles, different vehicles can be assigned an identifier (ID) to distinguish them. Among the recognized vehicles, those used to determine the sign line are screened out; such a vehicle is referred to as a target vehicle in this embodiment. The screening process may include: determining, from the travel trajectory of each vehicle, the vehicles that stop during travel; and, among those vehicles, screening out the vehicles stopped in the head row, i.e., with no other vehicle ahead of them, as the target vehicles.
Generally, a target vehicle in the first row at an intersection stops for a red light with its head on the stop line. Based on this, a stop line can be obtained in the first video image from the stop positions of a predetermined number of target vehicles, and this stop line can be used as the sign line in the first video image.
Further, when the travel trajectory of each vehicle is acquired, a lane line may also be generated from the trajectory. For example, the lane width may be measured in advance in a video image sample; the travel trajectory is then expanded by the lane width in the video image to obtain the lane, and lane lines are marked on both sides of the lane as sign lines in the first video image.
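As an illustrative sketch only, the trajectory-expansion step could offset each trajectory point perpendicular to the local direction of travel by half the lane width. The function name, the perpendicular-offset construction, and the use of a pixel-measured lane width are assumptions of this sketch, not part of the disclosed method.

```python
import math

def lane_lines_from_trajectory(trajectory, lane_width_px):
    """trajectory: list of (x, y) points fitted from one vehicle's track.
    Returns the left and right lane lines as point lists."""
    half = lane_width_px / 2.0
    left, right = [], []
    for i, (x, y) in enumerate(trajectory):
        # Local travel direction estimated from the neighbouring points.
        x0, y0 = trajectory[max(i - 1, 0)]
        x1, y1 = trajectory[min(i + 1, len(trajectory) - 1)]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1.0
        # Unit normal, perpendicular to the travel direction.
        nx, ny = -dy / norm, dx / norm
        left.append((x + nx * half, y + ny * half))
        right.append((x - nx * half, y - ny * half))
    return left, right
```

For a straight vertical trajectory, the two returned lines lie half a lane width to either side of the track.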
The sign line in the first video image is compared with a reference sign line to determine whether its offset reaches a predetermined condition. The predetermined condition may be, for example, that the degree of coincidence between the lines falls below a threshold, or that the difference in slope exceeds a threshold. If the predetermined condition is reached, the image acquisition device can be determined to have shifted, where a shift includes but is not limited to a change in position or angle.
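A minimal sketch of such an offset check, assuming each sign line is represented by two endpoint coordinates. The slope and displacement thresholds here are illustrative values, not values from the disclosure, and mean endpoint displacement is used as a simple stand-in for the "coincidence degree" criterion.

```python
def line_slope(p0, p1):
    """Slope of the segment p0-p1; infinite for vertical segments."""
    dx = p1[0] - p0[0]
    return float('inf') if dx == 0 else (p1[1] - p0[1]) / dx

def is_offset(detected, reference, slope_tol=0.05, shift_tol=10.0):
    """Each line is a pair of (x, y) endpoints in image coordinates."""
    slope_diff = abs(line_slope(*detected) - line_slope(*reference))
    # Mean endpoint displacement between the two lines, in pixels.
    shift = sum(
        ((dx - rx) ** 2 + (dy - ry) ** 2) ** 0.5
        for (dx, dy), (rx, ry) in zip(detected, reference)
    ) / 2.0
    return slope_diff > slope_tol or shift > shift_tol
```

An identical line yields no offset; a line displaced by 30 pixels trips the displacement criterion.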
Wherein the reference sign line may be determined in the same manner as the sign line in the first video image. For example, on the first day or the first month after the installation of the image capturing apparatus is completed, the multi-frame still image at the corresponding time may be identified, and the marker line obtained by the identification may be used as the reference marker line.
Or after the image acquisition equipment is installed, the pre-trained marking line recognition model can be adopted to recognize the marking line in the video image, and the recognition result is used as the reference marking line.
Still alternatively, a manual labeling mode may be adopted. For example, after the image capturing device is installed, a worker marks a stop line on a video image captured by the image capturing device, and the marking result is used as a reference mark line.
To increase the accuracy of the judgment, when the offset between the sign line in the first video image and the reference sign line reaches the predetermined condition, sign lines may additionally be determined in a second video image, a third video image, and so on, in the same manner as in the first video image.
The sign lines in these video images are each compared with the reference sign line, and the image acquisition device is determined to have shifted only if the offsets of the sign lines in these video images also reach the predetermined condition.
By the scheme, the mark line in the first video image is compared with the reference mark line, and the offset condition of the image acquisition equipment is determined according to the comparison result. Therefore, the workload of manual detection is reduced, and the accuracy of offset monitoring can be improved by setting the reference sign line.
As shown in fig. 2, in one embodiment, determining a target vehicle includes:
s201: and identifying each vehicle appearing in each frame image of the first video image, and acquiring the running track of each vehicle.
S202: and determining each first vehicle which stops in the running process according to the running track of each vehicle.
S203: a first vehicle, in which no other vehicle exists within a forward predetermined range, is determined as a target vehicle.
For an identified vehicle, its travel trajectory can be obtained from its position in each frame of still image of the first video image. For example, suppose the vehicle with identifier ID1 is first detected in the N-th frame still image. Then, in each still image after the N-th frame that contains the vehicle with identifier ID1, the position of that vehicle is determined. Each position can be abstracted into a pixel point or pixel block, and the positions are fitted to obtain the travel trajectory of the vehicle with identifier ID1.
From the travel trajectory, it can be determined whether the vehicle with identifier ID1 stops during travel. If it stops, the vehicle with identifier ID1 is determined to be a first vehicle.
Further, it is determined whether another vehicle is present ahead of the first vehicle with identifier ID1 while it is parked. If no other vehicle is present, the first vehicle with identifier ID1 is determined to be the target vehicle. For example, a range threshold or distance threshold may be set, and it is detected whether any other vehicle exists within that threshold in the forward direction of the first vehicle with identifier ID1. If not, it can be determined that no other vehicle is in front of the first vehicle while it is parked.
Generally, a vehicle stops during travel at a red light. Through the above judgment process, the vehicles stopped in the first row at the intersection can be screened out, and their stop positions can be used in the subsequent steps to determine the position of the stop line, i.e., the sign line.
Through the scheme, the target vehicle for determining the sign line can be automatically confirmed by utilizing the vehicle identification and tracking technology.
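A minimal sketch of the head-of-queue screening described above, assuming vehicle positions keyed by ID and travel toward the lower image edge (decreasing y). The lane tolerance and the distance threshold are illustrative assumptions, not values from the disclosure.

```python
def screen_target_vehicles(stopped, all_positions, ahead_thresh=40.0):
    """stopped: {vid: (x, y)} stop positions of first vehicles.
    all_positions: {vid: (x, y)} positions of every vehicle in the frame.
    Returns the IDs of target vehicles (no other vehicle ahead)."""
    targets = []
    for vid, (x, y) in stopped.items():
        blocked = any(
            ovid != vid
            and abs(ox - x) < 5.0           # roughly the same lane
            and 0 < y - oy < ahead_thresh   # ahead, within the threshold
            for ovid, (ox, oy) in all_positions.items()
        )
        if not blocked:
            targets.append(vid)
    return targets
```

A vehicle with another vehicle close ahead in its lane is rejected; the front vehicle itself is kept as the target.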
In one embodiment, determining each first vehicle that has stopped while traveling includes:
the position variation amount of each vehicle in each frame of the still image is acquired, and the vehicle having the position variation amount lower than the threshold value is determined as the first vehicle.
Again take the vehicle with identifier ID1 as an example. Suppose the identifier ID1 is first detected in the N-th frame still image. Starting from the (N+1)-th frame, each still image is traversed one by one, and every frame containing the vehicle with identifier ID1 is selected.
The position of the vehicle with identifier ID1 is determined in each of these frames. If the amount of change in the vehicle's position across a predetermined number of still images is below the threshold, it can be determined that the vehicle with identifier ID1 stops during travel. The predetermined number may be 30 frames, 50 frames, and so on.
The amount of change in the position of the vehicle with identifier ID1 across the predetermined number of still images may be determined directly from the still images. For example, in a still image, the recognition result for the vehicle with identifier ID1 may be a detection box labeled with the vehicle's identifier. The center point of the detection box may be taken as the position of the vehicle; from the coordinates of this center point in each frame, the amount of change in the vehicle's position can be obtained.
In addition, the position of the vehicle with identifier ID1 in each frame of still image may first be converted into the world coordinate system, and the amount of position change determined there.
Through the scheme, the vehicles which are parked in the driving process can be accurately screened out.
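The stop test above can be sketched as follows, assuming per-frame detection boxes for one vehicle ID. The window of 30 frames and the pixel threshold are illustrative, matching the "predetermined number" and "threshold" in the text only in role.

```python
def box_center(box):
    """Center point of a detection box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def has_stopped(boxes, window=30, thresh=2.0):
    """boxes: per-frame detection boxes for one vehicle ID.
    True if the box center stays within `thresh` pixels (L1 distance)
    over any run of `window` consecutive frames."""
    centers = [box_center(b) for b in boxes]
    for i in range(len(centers) - window + 1):
        cx0, cy0 = centers[i]
        if all(
            abs(cx - cx0) + abs(cy - cy0) < thresh
            for cx, cy in centers[i + 1:i + window]
        ):
            return True
    return False
```

A stationary box over 30 frames is flagged as stopped; a box translating 5 pixels per frame is not.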
As shown in fig. 3, in one embodiment, determining a sign line in a first video image based on a stop position includes:
s301: the coordinates of the stop position of each target vehicle are counted.
S302: and determining the sign line in the first video image according to the statistical result.
In the image, the recognition result for each target vehicle may be a detection box. The front and rear of the detection box can be determined from the target vehicle's direction of travel; the front edge of the box is taken as the vehicle head position of the corresponding target vehicle, and the head position as the stop position.
Take the example of target vehicles entering from the upper edge and exiting from the lower edge of the first video image. Since the target vehicles stop side by side at the intersection, the ordinates of their stop positions in the first video image differ little. Based on this, the average of the ordinates of the target vehicles' stop positions can be computed. The ordinate is a coordinate in the still image; for example, the pixel at the lower-left corner of each frame may serve as the coordinate origin. A horizontal line segment at the average ordinate can then be obtained in the video image and used as the sign line, i.e., the stop line.
With the above arrangement, the position of the intersection stop line can be mined from the stop positions of a predetermined number of target vehicles. Using the stop line as the sign line allows the sign line in the image to be identified automatically.
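A minimal sketch of this statistic, assuming head positions as (x, y) points with the origin at the lower-left pixel. Emitting a horizontal segment spanning the observed abscissas is one possible reading of the text, not the only one.

```python
def stop_line_from_positions(stop_positions):
    """stop_positions: list of (x, y) head positions of target vehicles.
    Returns the two endpoints of a horizontal sign line at the
    average ordinate, spanning the observed abscissas."""
    xs = [x for x, _ in stop_positions]
    ys = [y for _, y in stop_positions]
    y_line = sum(ys) / len(ys)
    return (min(xs), y_line), (max(xs), y_line)
```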
In one embodiment, the method further comprises:
and in the case that the offset does not reach the preset condition, the reference marking line is adjusted by using the marking line in the first video image.
If the offset between the sign line in the first video image and the reference sign line does not reach the predetermined condition, it can be determined that the image acquisition device has not shifted. Based on this, both the reference sign line and the sign line in the first video image can be taken as sign line samples; statistics are computed over the samples, and the reference sign line is replaced with the statistical result, thereby adjusting the reference sign line. For example, when the sign lines are represented as pixel points, the intersection or union of their pixel points may be computed; alternatively, the median line of the sample lines may be used.
With this scheme, given enough sign line samples, the adjusted reference sign line approaches the true position of the sign line.
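One possible reading of the statistics over sign line samples is an endpoint-wise average. A minimal sketch under that assumption, with each sample line a pair of (x, y) endpoints:

```python
def adjust_reference(samples):
    """samples: list of sign lines, each a pair of (x, y) endpoints
    (the current reference line may itself be included as a sample).
    Returns the averaged line to use as the new reference."""
    n = len(samples)
    return tuple(
        (
            sum(line[i][0] for line in samples) / n,
            sum(line[i][1] for line in samples) / n,
        )
        for i in range(2)
    )
```

Averaging two horizontal lines at ordinates 100 and 104 yields a reference at ordinate 102.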
As shown in fig. 4, the present application provides a monitoring device of an image capturing apparatus, the device comprising:
a stopping position determining module 401, configured to determine a stopping position of each target vehicle from the first video image captured by the image capturing device.
A sign line determining module 402, configured to determine a sign line in the first video image according to the stop position of each target vehicle.
And an offset determining module 403, configured to determine an offset amount of the marker line in the first video image with respect to the reference marker line, and determine that the image capturing device is offset when the offset amount reaches a predetermined condition.
As shown in fig. 5, in one embodiment, the stop position determination module 401 includes:
the driving track determining submodule 4011 is configured to identify each vehicle appearing in each frame of still image of the first video image, and acquire a driving track of each vehicle.
The first vehicle determination submodule 4012 is configured to determine, according to the travel trajectory of each vehicle, each first vehicle that stops during the travel.
The target vehicle determining sub-module 4013 is configured to determine, as the target vehicle, a first vehicle in which no other vehicle exists within a forward predetermined range.
In one embodiment, the first vehicle determination sub-module 4012 is further configured to:
the position variation amount of each vehicle in each frame of the still image is acquired, and the vehicle having the position variation amount lower than the threshold value is determined as the first vehicle.
As shown in fig. 6, in one embodiment, the sign line determining module 402 includes:
the coordinate counting submodule 4021 is configured to count the coordinates of the stop position of each target vehicle.
The sign line determination sub-module 4022 is configured to determine a sign line in the first video image according to the statistical result.
In one embodiment, the monitoring device of the image capturing apparatus further comprises:
and the reference sign line adjusting module is used for adjusting the reference sign line by using the sign line in the first video image under the condition that the offset does not reach a preset condition.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device for the monitoring method of an image capturing device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 710, a memory 720, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information for a Graphical User Interface (GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 710 is illustrated in fig. 7.
The memory 720, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the monitoring method of the image capturing apparatus in the embodiments of the present application (for example, the stop position determining module 401, the sign line determining module 402, and the offset determining module 403 shown in fig. 4). The processor 710 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 720, namely, implements the monitoring method of the image capturing device in the above method embodiment.
The memory 720 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the monitoring method of the image capturing device, and the like. Further, the memory 720 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 720 optionally includes memory located remotely from the processor 710, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device may further include: an input device 730 and an output device 740. The processor 710, the memory 720, the input device 730, and the output device 740 may be connected by a bus or other means, such as the bus connection in fig. 7.
The input device 730 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus, and may be, for example, a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or another input device. The output device 740 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light-Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, Application-Specific Integrated Circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode Ray Tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; the present application is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (12)
1. A method of monitoring an image capture device, comprising:
determining the stop position of each target vehicle from a first video image acquired by image acquisition equipment;
determining a sign line in the first video image according to the stop position of each target vehicle;
determining the offset of the mark line in the first video image relative to the reference mark line, and determining that the image acquisition equipment is offset when the offset reaches a preset condition.
2. The method of claim 1, wherein the target vehicle is determined in a manner comprising:
identifying each vehicle appearing in each frame of static image of the first video image, and acquiring the running track of each vehicle;
determining each first vehicle which stops in the driving process according to the driving track of each vehicle;
determining the first vehicle, in which no other vehicle exists within a forward predetermined range, as the target vehicle.
3. The method of claim 2, wherein the determining each first vehicle that has come to a stop during travel comprises:
acquiring the position variation of each vehicle in each frame of static image;
and determining the vehicle with the position variation lower than a threshold value as the first vehicle.
4. The method of claim 1, wherein determining a sign line in the first video image based on the stop position comprises:
counting the coordinates of the stop positions of the target vehicles;
and determining a sign line in the first video image according to the statistical result.
5. The method of claim 1, further comprising:
and under the condition that the offset does not reach a preset condition, the reference marking line is adjusted by using the marking line in the first video image.
6. A monitoring device of an image acquisition apparatus, comprising:
the stopping position determining module is used for determining the stopping position of each target vehicle from the first video image acquired by the image acquisition equipment;
the mark line determining module is used for determining the mark line in the first video image according to the stop position of each target vehicle;
and the offset determining module is used for determining the offset of the mark line in the first video image relative to the reference mark line, and determining that the image acquisition equipment is offset under the condition that the offset reaches a preset condition.
7. The apparatus of claim 6, wherein the stop position determination module comprises:
the driving track determining submodule is used for identifying each vehicle appearing in each frame of static image of the first video image and acquiring the driving track of each vehicle;
the first vehicle determining submodule is used for determining each first vehicle which stops in the driving process according to the driving track of each vehicle;
a target vehicle determination sub-module for determining the first vehicle, for which no other vehicle exists within a forward predetermined range, as the target vehicle.
8. The apparatus of claim 7, wherein the first vehicle determination sub-module is further to:
and acquiring the position variation of each vehicle in each frame of static image, and determining the vehicle with the position variation lower than a threshold value as a first vehicle.
9. The apparatus of claim 6, wherein the sign line determining module comprises:
the coordinate counting submodule is used for counting the coordinates of the stop positions of the target vehicles;
and the marking line determination execution submodule is used for determining the marking line in the first video image according to the statistical result.
10. The apparatus of claim 6, further comprising:
and the reference sign line adjusting module is used for adjusting the reference sign line by using the sign line in the first video image under the condition that the offset does not reach a preset condition.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; characterized in that,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 5.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010413763.4A CN111540023B (en) | 2020-05-15 | 2020-05-15 | Monitoring method and device of image acquisition equipment, electronic equipment and storage medium |
JP2020198534A JP7110310B2 (en) | 2020-05-15 | 2020-11-30 | MONITORING METHOD, APPARATUS, ELECTRONIC EQUIPMENT, STORAGE MEDIUM, AND PROGRAM FOR IMAGE ACQUISITION FACILITIES |
EP20217743.2A EP3910533B1 (en) | 2020-05-15 | 2020-12-30 | Method, apparatus, electronic device, and storage medium for monitoring an image acquisition device |
US17/142,011 US11423659B2 (en) | 2020-05-15 | 2021-01-05 | Method, apparatus, electronic device, and storage medium for monitoring an image acquisition device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010413763.4A CN111540023B (en) | 2020-05-15 | 2020-05-15 | Monitoring method and device of image acquisition equipment, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111540023A CN111540023A (en) | 2020-08-14 |
CN111540023B true CN111540023B (en) | 2023-03-21 |
Family
ID=71977748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010413763.4A Active CN111540023B (en) | 2020-05-15 | 2020-05-15 | Monitoring method and device of image acquisition equipment, electronic equipment and storage medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US11423659B2 (en) |
EP (1) | EP3910533B1 (en) |
JP (1) | JP7110310B2 (en) |
CN (1) | CN111540023B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744345A (en) * | 2021-08-26 | 2021-12-03 | 浙江大华技术股份有限公司 | Camera tilt detection method, device, electronic device, and storage medium |
CN114332826B (en) * | 2022-03-10 | 2022-07-08 | 浙江大华技术股份有限公司 | Vehicle image recognition method and device, electronic equipment and storage medium |
CN115842848B (en) * | 2023-03-01 | 2023-04-28 | 成都远峰科技发展有限公司 | Dynamic monitoring system based on industrial Internet of things and control method thereof |
CN117750196B (en) * | 2024-02-10 | 2024-05-28 | 苔花科迈(西安)信息技术有限公司 | Data acquisition method and device of underground drilling site mobile camera device based on template |
CN118155143B (en) * | 2024-05-11 | 2024-09-03 | 浙江深象智能科技有限公司 | Vehicle monitoring method, device, system and equipment |
CN118317078B (en) * | 2024-06-11 | 2024-08-06 | 湖南省华芯医疗器械有限公司 | Image transmission delay detection method, detection system and storage medium |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006048991A1 (en) * | 2004-11-04 | 2006-05-11 | Shuhou Co., Ltd. | Offset printing method using ink-jet system and printed object by same |
CN103162632A (en) * | 2013-03-26 | 2013-06-19 | 中国水利水电科学研究院 | Three-dimensional (3D) optical displacement measuring system for centrifugal model |
CN103700277A (en) * | 2013-12-11 | 2014-04-02 | 安徽锐通信息技术有限公司 | Parking position recording system, mobile terminal and parking position recording method |
CN104574438A (en) * | 2014-12-23 | 2015-04-29 | 中国矿业大学 | Axial offset video detection method for winch main shaft |
CN104742912A (en) * | 2013-12-27 | 2015-07-01 | 比亚迪股份有限公司 | Lane deviation detection method and device |
CN105828044A (en) * | 2016-05-09 | 2016-08-03 | 深圳信息职业技术学院 | Monitoring system and monitoring method based on the monitoring system |
CN106303422A (en) * | 2016-08-12 | 2017-01-04 | 浙江宇视科技有限公司 | A kind of live video display packing and equipment |
CN106570906A (en) * | 2016-11-09 | 2017-04-19 | 东南大学 | Rectangular pattern-based method for detecting distances under camera angle deflection condition |
CN107144285A (en) * | 2017-05-08 | 2017-09-08 | 深圳地平线机器人科技有限公司 | Posture information determines method, device and movable equipment |
CN109949365A (en) * | 2019-03-01 | 2019-06-28 | 武汉光庭科技有限公司 | Vehicle designated position parking method and system based on road surface characteristic point |
CN110401583A (en) * | 2019-06-21 | 2019-11-01 | 深圳绿米联创科技有限公司 | Method, apparatus, system, mobile terminal and the storage medium of equipment replacement |
CN110516652A (en) * | 2019-08-30 | 2019-11-29 | 北京百度网讯科技有限公司 | Method, apparatus, electronic equipment and the storage medium of lane detection |
CN110533925A (en) * | 2019-09-04 | 2019-12-03 | 上海眼控科技股份有限公司 | Processing method, device, computer equipment and the storage medium of vehicle illegal video |
CN110621541A (en) * | 2018-04-18 | 2019-12-27 | 百度时代网络技术(北京)有限公司 | Map-free and location-free lane-following driving method for the automatic driving of an autonomous vehicle on a highway |
CN110909711A (en) * | 2019-12-03 | 2020-03-24 | 北京百度网讯科技有限公司 | Method, device, electronic equipment and storage medium for detecting lane line position change |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5471516B2 (en) | 2010-01-28 | 2014-04-16 | トヨタ自動車株式会社 | Deceleration support device |
JP5360076B2 (en) | 2011-01-14 | 2013-12-04 | 株式会社デンソー | Obstacle notification device |
US8970701B2 (en) * | 2011-10-21 | 2015-03-03 | Mesa Engineering, Inc. | System and method for predicting vehicle location |
KR102203410B1 (en) * | 2014-10-20 | 2021-01-18 | 삼성에스디에스 주식회사 | Method and Apparatus for Setting Region of Interest |
KR102482414B1 (en) * | 2016-06-24 | 2022-12-29 | 삼성전자 주식회사 | A key engaging apparatus and electronic device having the same |
CN109871752A (en) | 2019-01-04 | 2019-06-11 | 北京航空航天大学 | A method of lane line is extracted based on monitor video detection wagon flow |
CN110798681B (en) | 2019-11-12 | 2022-02-01 | 阿波罗智联(北京)科技有限公司 | Monitoring method and device of imaging equipment and computer equipment |
2020
- 2020-05-15 CN CN202010413763.4A patent/CN111540023B/en active Active
- 2020-11-30 JP JP2020198534A patent/JP7110310B2/en active Active
- 2020-12-30 EP EP20217743.2A patent/EP3910533B1/en active Active
2021
- 2021-01-05 US US17/142,011 patent/US11423659B2/en active Active
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006048991A1 (en) * | 2004-11-04 | 2006-05-11 | Shuhou Co., Ltd. | Offset printing method using ink-jet system and printed object by same |
CN103162632A (en) * | 2013-03-26 | 2013-06-19 | 中国水利水电科学研究院 | Three-dimensional (3D) optical displacement measuring system for centrifugal model |
CN103700277A (en) * | 2013-12-11 | 2014-04-02 | 安徽锐通信息技术有限公司 | Parking position recording system, mobile terminal and parking position recording method |
CN104742912A (en) * | 2013-12-27 | 2015-07-01 | 比亚迪股份有限公司 | Lane deviation detection method and device |
CN104574438A (en) * | 2014-12-23 | 2015-04-29 | 中国矿业大学 | Axial offset video detection method for winch main shaft |
CN105828044A (en) * | 2016-05-09 | 2016-08-03 | 深圳信息职业技术学院 | Monitoring system and monitoring method based on the monitoring system |
CN106303422A (en) * | 2016-08-12 | 2017-01-04 | 浙江宇视科技有限公司 | A kind of live video display packing and equipment |
CN106570906A (en) * | 2016-11-09 | 2017-04-19 | 东南大学 | Rectangular pattern-based method for detecting distances under camera angle deflection condition |
CN107144285A (en) * | 2017-05-08 | 2017-09-08 | 深圳地平线机器人科技有限公司 | Posture information determines method, device and movable equipment |
CN110621541A (en) * | 2018-04-18 | 2019-12-27 | 百度时代网络技术(北京)有限公司 | Map-free and location-free lane-following driving method for the automatic driving of an autonomous vehicle on a highway |
CN109949365A (en) * | 2019-03-01 | 2019-06-28 | 武汉光庭科技有限公司 | Vehicle designated position parking method and system based on road surface characteristic point |
CN110401583A (en) * | 2019-06-21 | 2019-11-01 | 深圳绿米联创科技有限公司 | Method, apparatus, system, mobile terminal and the storage medium of equipment replacement |
CN110516652A (en) * | 2019-08-30 | 2019-11-29 | 北京百度网讯科技有限公司 | Method, apparatus, electronic equipment and the storage medium of lane detection |
CN110533925A (en) * | 2019-09-04 | 2019-12-03 | 上海眼控科技股份有限公司 | Processing method, device, computer equipment and the storage medium of vehicle illegal video |
CN110909711A (en) * | 2019-12-03 | 2020-03-24 | 北京百度网讯科技有限公司 | Method, device, electronic equipment and storage medium for detecting lane line position change |
Non-Patent Citations (5)
Title |
---|
刘锋; 丁治. Research on vehicle type recognition based on target size measurement in video frames. 2015, (13), p. 152. *
焦欣欣; 王民慧; 李晓鹏. Research on structured road detection algorithms based on monocular vision. 2011, (Z1), pp. 8-11. *
王铮; 王明; 张厚钧. Research on lateral offset of vehicles in crash tests. 2015, (12), pp. 98-100. *
符锌砂; 何石坚; 杜锦涛; 葛婷. Influence of abrupt geometric changes of the alignment on lane offset in curved road sections. 2019, (12), pp. 110-118. *
胡广胜; 王菁; 孙福庆. An image-recognition-based inspection system for the assembly process of rail transit vehicles. 2020, (04), pp. 82-86. *
Also Published As
Publication number | Publication date |
---|---|
US11423659B2 (en) | 2022-08-23 |
CN111540023A (en) | 2020-08-14 |
EP3910533B1 (en) | 2024-04-10 |
JP7110310B2 (en) | 2022-08-01 |
US20210357660A1 (en) | 2021-11-18 |
EP3910533A1 (en) | 2021-11-17 |
JP2021179964A (en) | 2021-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111540023B (en) | Monitoring method and device of image acquisition equipment, electronic equipment and storage medium | |
CN110910665B (en) | Signal lamp control method and device and computer equipment | |
CN112053563B (en) | Event detection method and device applicable to edge computing platform and cloud control platform | |
CN111860319A (en) | Method for determining lane line, method, device and equipment for evaluating positioning accuracy | |
CN111275983B (en) | Vehicle tracking method, device, electronic equipment and computer-readable storage medium | |
CN112507949A (en) | Target tracking method and device, road side equipment and cloud control platform | |
CN111523471B (en) | Method, device, equipment and storage medium for determining lane where vehicle is located | |
CN111292531B (en) | Tracking method, device and equipment of traffic signal lamp and storage medium | |
CN112101223B (en) | Detection method, detection device, detection equipment and computer storage medium | |
CN113538911B (en) | Intersection distance detection method and device, electronic equipment and storage medium | |
CN113091757B (en) | Map generation method and device | |
CN111703371B (en) | Traffic information display method and device, electronic equipment and storage medium | |
CN111652112A (en) | Lane flow direction identification method and device, electronic equipment and storage medium | |
CN111640301B (en) | Fault vehicle detection method and fault vehicle detection system comprising road side unit | |
CN111540010B (en) | Road monitoring method and device, electronic equipment and storage medium | |
CN110968718A (en) | Target detection model negative sample mining method and device and electronic equipment | |
CN111339877B (en) | Method and device for detecting length of blind area, electronic equipment and storage medium | |
CN111666876A (en) | Method and device for detecting obstacle, electronic equipment and road side equipment | |
CN112257604A (en) | Image detection method, image detection device, electronic equipment and storage medium | |
CN111721305A (en) | Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium | |
CN110798681B (en) | Monitoring method and device of imaging equipment and computer equipment | |
CN111401248A (en) | Sky area identification method and device, electronic equipment and storage medium | |
CN113011298A (en) | Truncated object sample generation method, target detection method, road side equipment and cloud control platform | |
CN112735147B (en) | Method and device for acquiring delay index data of road intersection | |
CN110849327B (en) | Shooting blind area length determination method and device and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20211012 Address after: 100176 Room 101, 1st floor, building 1, yard 7, Ruihe West 2nd Road, economic and Technological Development Zone, Daxing District, Beijing Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing Applicant before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd. |
|
TA01 | Transfer of patent application right | ||
GR01 | Patent grant | ||
GR01 | Patent grant |