CN112396040B - Method and device for identifying lane occupation of vehicle - Google Patents


Info

Publication number
CN112396040B
CN112396040B · CN202110065613.3A
Authority
CN
China
Prior art keywords
image
vehicle
fire fighting
feature
visualization system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110065613.3A
Other languages
Chinese (zh)
Other versions
CN112396040A (en)
Inventor
蒋洪庆
张忠宝
江波
张武松
董照阳
戈宇
Current Assignee
Chengdu Sefon Software Co Ltd
Original Assignee
Chengdu Sefon Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Sefon Software Co Ltd filed Critical Chengdu Sefon Software Co Ltd
Priority to CN202110065613.3A priority Critical patent/CN112396040B/en
Publication of CN112396040A publication Critical patent/CN112396040A/en
Application granted granted Critical
Publication of CN112396040B publication Critical patent/CN112396040B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/017: Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175: Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The invention discloses a vehicle lane-occupation identification method and device, which mainly address the shortcomings of existing image recognition systems: they cannot identify vehicles and roads that occupy a very small proportion of the picture, cannot identify fire fighting channels, and cannot count how long a vehicle stays at a given position. The invention binds image recognition technology to a three-dimensional visualization system to dynamically recognize vehicles occupying a fire fighting channel, providing a rapid, real-time lane-occupation alarm service for community and street management, standardizing management practice, and notifying managers to deal with fire-safety hazards in time.

Description

Method and device for identifying lane occupation of vehicle
Technical Field
The invention relates to the field of image visualization, and in particular to a method and a device for identifying a lane occupied by a vehicle.
Background
With the development of urban governance and data, the "city brain" is gradually replacing traditional urban governance and reaching deep into social governance, data analysis, emergency response, future development planning and other areas. With the development of three-dimensional visualization, the three-dimensional digital city has become very mature in visual effect; the digital city is gradually shifting from improving that effect toward better serving social governance, and accurate, error-free data sources are receiving growing attention from managers.
Data images are gradually becoming the "eyes" of city visualization, transmitting city information to a central system. Once the central system receives an image, how to sort, analyze and apply the image information to urban management becomes a matter of great concern; this is especially true because the city's image is strongly affected by the large number of vehicles and irregular parking behavior, which also obstructs emergency rescue.
Existing image recognition systems have no dedicated lane-occupation recognition capability. Most existing vehicle recognition systems are image classification systems: they can recognize vehicles and roads, but cannot handle every case in fine detail. If a vehicle occupies only a small proportion of the image it cannot be identified, and the same holds for roads; a fire fighting channel is even harder to identify in a picture. Existing systems also cannot count how long a vehicle stays at a given position. Moreover, an existing image recognition system can only capture a picture and hand it to a manager, who must then search near the camera indicated by the provided camera number to locate the problem, which is very inconvenient.
Disclosure of Invention
The invention aims to provide a vehicle lane-occupation identification method and device that solve the problems that conventional image recognition systems cannot identify a vehicle or road occupying a small proportion of the picture, cannot identify a fire fighting channel, and cannot count how long a vehicle stays at a given position.
In order to solve the above problems, the present invention provides the following technical solutions:
The method for identifying a lane occupied by a vehicle comprises the following steps:
S1, extracting shape features, color features, character features and pattern features of the image;
S2, performing image recognition on the shape features, color features, character features and pattern features from step S1 through a three-dimensional visualization system, and marking the fire fighting channel in the three-dimensional visualization system;
S3, associating the fire fighting channel of the three-dimensional visualization system in step S2 with the fire fighting channel captured by the corresponding physical camera;
S4, judging whether a vehicle has entered the fire fighting channel of the three-dimensional visualization system in step S3; if so, counting the vehicle's stay time, otherwise performing no operation.
According to this scheme, image recognition technology is bound to the three-dimensional visualization system to dynamically recognize vehicles occupying a fire fighting channel, providing a rapid, real-time lane-occupation alarm service for community and street management and standardizing management practice. Problems can be found and their positions located quickly without 24-hour manual inspection, in particular which fire fighting channels are occupied, whether by vehicles or other objects. The three-dimensional visualization system can also communicate with a vehicle management system, in which the information of the occupying vehicle can be registered.
Further, before extracting the shape feature, the color feature, the character feature, and the pattern feature of the image in step S1, the image needs to be processed, and the processing procedure is as follows:
s001, acquiring images of the road and the fire fighting channel through equipment;
s002, defogging the image collected in the step S001, enhancing contrast, lossless amplification and stretching recovery;
s003, carrying out compression coding on the image processed in the step S002;
S004, performing road segmentation, vehicle segmentation, fire fighting channel segmentation and background segmentation on the image compression-coded in step S003 to form the image to be feature-extracted.
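The contrast-enhancement part of step S002 can be illustrated with simple linear contrast stretching, one common way to improve a low-contrast image. This is an illustrative stand-in only; the patent does not name a specific enhancement algorithm:

```python
def contrast_stretch(pixels, out_min=0, out_max=255):
    """Linearly remap a flat list of grayscale values so that the
    darkest pixel maps to out_min and the brightest to out_max."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return list(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A low-contrast strip of pixels (values squeezed into 100..150).
strip = [100, 110, 125, 140, 150]
print(contrast_stretch(strip))  # remapped to span the full 0..255 range
```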
Further, in step S1, feature point descriptors are generated from the image to be feature-extracted in step S004 using the SURF feature extraction algorithm.
Further, the specific process of image recognition in step S2 is as follows: calculate whether the Euclidean distance between two feature point descriptors is within a set threshold; if so, a match is identified, otherwise it is excluded.
Further, after the vehicle stay time is counted in step S4, it is determined whether the stay time exceeds a preset time; if so, an alarm is raised in the three-dimensional visualization system, otherwise no operation is performed.
Further, the three-dimensional visualization system sends the alarm event, time and place to a manager via app notification and SMS push.
A vehicle lane-occupation recognition apparatus includes a memory for storing executable instructions, and a processor for executing the executable instructions stored in the memory to implement the vehicle lane-occupation identification method above.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention can find problems quickly without 24-hour manual inspection, quickly locate the position of a problem, and remotely remind the manager of the area to attend to it, in particular which positions, such as fire fighting channels, are occupied by vehicles or other objects; the system provides corresponding warning prompts, and can recognize and record license plate information and register it in the vehicle management system.
(2) The invention binds image recognition technology to a three-dimensional visualization system to dynamically recognize vehicles occupying a fire fighting channel, providing a rapid, real-time lane-occupation alarm service for community and street management, standardizing management practice, and notifying managers to deal with fire-safety hazards in time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts, wherein:
fig. 1 is a block flow diagram of example 1.
Fig. 2 is a block flow diagram of embodiment 2.
Fig. 3 is a block flow diagram of embodiment 3.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to fig. 1 to 3, the described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, a method for identifying a lane occupied by a vehicle includes the following steps:
1. Image processing: the main aims are to remove interference and noise and to prepare the original image into a form suitable for feature extraction by computer. It mainly comprises image acquisition, image enhancement, image restoration, image compression coding and image segmentation. The specific steps are as follows:
1) Image acquisition: the primary means of obtaining digital image data. A digital image is obtained by sampling and digitizing with devices such as digital video cameras, scanners and digital cameras; dynamic images can likewise be converted into digital images. In this embodiment, images of roads and fire fighting channels are mainly acquired, and vehicles of specific models are acquired to supplement the existing vehicle feature library.
2) Image enhancement: image quality degrades to some extent during imaging, acquisition, transmission and copying, and the visual effect of the digitized image suffers. To highlight the required part of the image and make its main structure clearer, the image must be improved, i.e. enhanced. In this embodiment, the image mainly undergoes defogging, contrast enhancement, lossless amplification, stretch restoration and similar processing.
3) Image restoration: when an image is acquired, environmental noise, motion blur, light intensity and other factors can blur it, and to extract a clearer image it must be restored. Image restoration mainly uses filtering to recover the original image from the degraded one. A related special technique is image reconstruction, which builds an image from a set of projection data of a cross-section of an object. This embodiment adopts the filtering approach.
4) Image compression and encoding: a digital image is characterized by a large data volume and occupies considerable storage space, more than network bandwidth and computer mass storage can handle directly. To transmit an image or video over a network quickly and conveniently, the image must be encoded and compressed. Image compression coding now has international standards, such as the well-known still-image compression standard JPEG, which targets the image's resolution, applies to color and grayscale images, and is suitable for digital photos, color photos and the like transmitted over a network. Image coding compression reduces the redundant data volume and memory footprint of the image, improves transmission speed and shortens processing time. Compression coding here uses existing methods, so the process is not described in detail.
5) Image segmentation: dividing an image into several non-overlapping sub-regions, each with its own characteristics, where each region is a connected set of pixels; the characteristics can be the image's color, shape, gray scale, texture and so on. Image segmentation represents the image as a set of physically meaningful connected regions according to prior knowledge of the target and the background; that is, the target and background in the image are labeled and localized, and the target is then separated from the background. In this embodiment the image undergoes road segmentation, vehicle segmentation, fire fighting channel segmentation, background segmentation and the like.
2. Feature extraction: divided into shape features, color features, character features and pattern features. Shape features are obtained through operations such as edge extraction and image segmentation, which yield edges and regions and thus the shape of a target; the shape features of any object can be described by its geometric attributes (such as length, area, distance, concavity and convexity), statistical attributes (such as projection) and topological structure (such as connectivity and the Euler number). This embodiment uses the SURF feature extraction algorithm, whose principle is as follows:
1) Scale-space extremum detection (the first SURF feature detection step): search the image over all scale spaces and identify potential interest points that are invariant to scale and rotation via the Hessian; then filter the feature points and locate them precisely. This step prepares for building the matrix in the next step.
Feature direction assignment: count the Haar wavelet features in the circular neighborhood of the feature point; that is, rotate a 60-degree sector in steps of 0.2 radians, sum the responses within the sector area at each step, and take the direction of the sector with the largest value as the main direction of the feature point.
Feature point description: take 4×4 small rectangular regions around the feature point, oriented along its main direction, and count the Haar features of each small region; each region then yields a 4-dimensional feature vector, giving the feature point a 64-dimensional feature vector as the descriptor of the SURF feature.
2) Constructing the Hessian matrix
The purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image, similar in effect to Canny or Laplacian edge detection, in preparation for feature extraction. Constructing the Hessian matrix corresponds to the DoG step in the SIFT algorithm. The Hessian matrix is a square matrix formed by the second-order partial derivatives of a multivariate function and describes the local curvature of the function; for an image I(x, y), Gaussian filtering must be applied before the Hessian matrix is constructed. This step yields the image's discriminant feature values.
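The Hessian response described above can be made concrete with central finite differences: estimate the second derivatives Dxx, Dyy and Dxy at a pixel and combine them as det(H) = Dxx*Dyy - (w*Dxy)^2, where the weight w of about 0.9 is SURF's customary correction for its box-filter approximation. A toy pure-Python version (real SURF computes this with box filters over an integral image for speed; the image and values here are hypothetical):

```python
def hessian_response(img, x, y, w=0.9):
    """Approximate det(Hessian) at interior pixel (x, y) of a 2-D
    grayscale grid using central finite differences."""
    dxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
    dyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
    dxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    # SURF weights the mixed term to compensate box-filter error.
    return dxx * dyy - (w * dxy) ** 2

# Synthetic 5x5 image: a bright blob at the centre, flat elsewhere.
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 9, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
print(hessian_response(img, 2, 2))  # strong response at the blob
print(hessian_response(img, 1, 1))  # weak response on flat background
```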
3) Constructing the Gaussian pyramid
Compared with the Gaussian pyramid construction of the SIFT algorithm, SURF improves on its speed. In SIFT, the image size of each group (octave) differs: the next group is a down-sample (1/4 size) of the previous group's images, while the several images within each group have the same size but use different scales σ; during blurring, the Gaussian template size stays constant and only the scale σ changes. In SURF, by contrast, the image size always stays constant and only the size of the Gaussian blur template changes (the scale σ changes as well, of course).
4) Determining the main direction of the feature point
To ensure rotation invariance, SURF does not compute a gradient histogram; it instead counts Haar wavelet features in the feature point's neighborhood. Within a neighborhood of radius 6s centered on the feature point (where s is the feature point's scale value), the sums of the Haar wavelet responses of all points in the x (horizontal) and y (vertical) directions are computed inside a 60-degree sector, with the Haar wavelet side length set to 4s. The response values are given Gaussian weight coefficients so that responses near the feature point contribute more and distant responses contribute less. The responses within each 60-degree range are then added to form a new vector, the whole circular region is traversed, and the direction of the longest vector is selected as the main direction of the feature point. The main direction of each feature point is obtained in this way, one point at a time, and is used in the next step for feature comparison and for generating the feature description.
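The sliding-sector scan described above can be sketched directly: given the (dx, dy) Haar responses of the neighborhood sample points, rotate a 60-degree (pi/3 rad) window in 0.2-rad steps, sum the responses whose angle falls inside the window, and keep the direction of the longest summed vector. The responses below are hypothetical and the Gaussian weighting is omitted for brevity:

```python
import math

def main_direction(responses, sector=math.pi / 3, step=0.2):
    """Slide a 60-degree sector around the circle in 0.2-rad steps;
    return the angle of the longest summed (dx, dy) response vector."""
    best_len, best_angle = -1.0, 0.0
    start = 0.0
    while start < 2 * math.pi:
        sx = sy = 0.0
        for dx, dy in responses:
            ang = math.atan2(dy, dx) % (2 * math.pi)
            # Is this response's angle inside [start, start + sector)?
            if (ang - start) % (2 * math.pi) < sector:
                sx += dx
                sy += dy
        length = math.hypot(sx, sy)
        if length > best_len:
            best_len = length
            best_angle = math.atan2(sy, sx) % (2 * math.pi)
        start += step
    return best_angle

# Most responses cluster around ~45 degrees; one outlier points away.
resp = [(1.0, 1.0), (0.9, 1.1), (1.1, 0.9), (-0.2, 0.0)]
print(main_direction(resp))  # close to pi/4
```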
5) Generating the feature point descriptor
In SIFT, 4×4 region blocks around the feature point are taken and the 8 gradient directions in each small block are counted, so that vectors of 4×4×8 = 128 dimensions serve as the descriptors of SIFT features. The SURF algorithm also takes a 4×4 block of rectangular regions around the feature point, but the regions are oriented along the feature point's main direction. Each subregion counts the Haar wavelet features of 25 sample points in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction. The Haar wavelet features comprise 4 values per subregion: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses.
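The 64-dimensional descriptor layout follows directly from the counts above: 4×4 subregions, each contributing the four statistics (sum of dx, sum of dy, sum of |dx|, sum of |dy|), gives 4 × 4 × 4 = 64 values. A minimal sketch with hypothetical per-subregion responses:

```python
def surf_descriptor(subregions):
    """Build a SURF-style descriptor from a 4x4 grid of subregions.
    Each subregion is a list of (dx, dy) Haar responses; each
    contributes [sum dx, sum dy, sum |dx|, sum |dy|]."""
    assert len(subregions) == 16, "expected a 4x4 grid of subregions"
    desc = []
    for samples in subregions:
        sdx = sum(dx for dx, _ in samples)
        sdy = sum(dy for _, dy in samples)
        adx = sum(abs(dx) for dx, _ in samples)
        ady = sum(abs(dy) for _, dy in samples)
        desc.extend([sdx, sdy, adx, ady])
    return desc

# 16 hypothetical subregions, each with 25 (5x5) sample responses.
grid = [[(0.1, -0.2)] * 25 for _ in range(16)]
d = surf_descriptor(grid)
print(len(d))  # the 64-dimensional SURF descriptor
```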
3. Image recognition: calculate whether the Euclidean distance between two feature point descriptors is within a set threshold; if so, a match is identified, otherwise it is excluded.
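The matching rule is a thresholded Euclidean distance between two descriptors. The threshold value is not specified in the patent, so the one below is purely illustrative:

```python
import math

def is_match(desc_a, desc_b, threshold=0.3):
    """Two feature-point descriptors match when their Euclidean
    distance falls within the set threshold."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)))
    return dist <= threshold

d1 = [0.1, 0.5, 0.2, 0.8]
d2 = [0.1, 0.5, 0.2, 0.8]   # identical descriptor: matched
d3 = [0.9, 0.0, 0.7, 0.1]   # distant descriptor: excluded
print(is_match(d1, d2), is_match(d1, d3))
```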
4. Point location marking: image recognition is completed in the three-dimensional visualization system; the fire fighting channels are marked in the three-dimensional visualization system, and the fire fighting channel recognized by each on-site camera is associated with its counterpart in the three-dimensional visualization system, so that when a vehicle stops in a fire fighting channel, the corresponding visualization system can prompt and quickly locate the position of the occupied road.
5. Time statistics: when a vehicle entering the fire fighting channel is identified by the three-dimensional visualization system, the system counts the vehicle's stay time in the fire fighting channel in the background; if the stay exceeds the allowed period, an alarm is raised in the visualization system.
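The background dwell-time counter can be sketched as a small state machine over timestamped presence samples: record the entry time when a vehicle first appears in the channel, accumulate the stay while it remains, and flag an alarm once the allowed period is exceeded. All names, the sampling interval and the threshold are hypothetical:

```python
def dwell_alarms(detections, allowed_seconds):
    """detections: time-ordered list of (timestamp_seconds,
    vehicle_present) samples. Returns the dwell time of the most
    recent stay and whether an alarm should fire."""
    entered_at = None
    dwell = 0
    for t, present in detections:
        if present and entered_at is None:
            entered_at = t              # vehicle just entered the lane
        elif present:
            dwell = t - entered_at      # still parked: update stay time
        elif entered_at is not None:
            entered_at = None           # vehicle left: reset the stay
            dwell = 0
    return dwell, dwell > allowed_seconds

# One sample every 60 s; the vehicle stays from t=60 to t=360.
samples = [(0, False)] + [(t, True) for t in range(60, 361, 60)]
print(dwell_alarms(samples, allowed_seconds=180))
```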
Example 2
As shown in fig. 2, the present embodiment is different from embodiment 1 in that the image is an image directly usable for feature extraction, and does not need to be subjected to image processing, and only includes steps 2 to 5 in embodiment 1.
Example 3
As shown in fig. 3, the present embodiment is different from embodiment 1 in that the image can be directly used as an image for image recognition, and does not need image processing and feature extraction, and only includes steps 3 to 5 in embodiment 1.
Example 4
As shown in fig. 1, on the basis of embodiment 1, the present embodiment further provides administrator binding: the three-dimensional visualization system communicates with the administrator system; once an alarm is detected, the alarm message is immediately pushed to the administrator's management app, sending the alarm event, time and place to the administrator, and the message is also sent via SMS push.
Example 5
This embodiment, on the basis of embodiment 1, provides a vehicle lane-occupation identification device comprising a memory for storing executable instructions and a processor for executing the executable instructions stored in the memory to implement the vehicle lane-occupation identification method.
The invention binds image recognition technology to a three-dimensional visualization system to dynamically recognize vehicles occupying a fire fighting channel, providing a rapid, real-time lane-occupation alarm service for community and street management, standardizing management practice, and notifying managers to deal with fire-safety hazards in time.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A method for identifying a lane occupied by a vehicle is characterized by comprising the following steps:
s1, extracting shape features, color features, character features and pattern features of the image;
s2, carrying out image recognition on the shape characteristics, the color characteristics, the character characteristics and the pattern characteristics in the step S1 through a three-dimensional visualization system, and marking a fire fighting channel in the three-dimensional visualization system;
s3, correspondingly associating the fire fighting channel of the three-dimensional visualization system in the step S2 with the fire fighting channel shot by the solid camera;
s4, judging whether a vehicle enters the fire fighting passage of the three-dimensional visualization system in the step S3, if so, counting the stay time of the vehicle, otherwise, not performing any operation.
2. The vehicle lane-occupation identification method according to claim 1, wherein the image is processed before the shape feature, color feature, character feature and pattern feature of the image are extracted in step S1, the processing procedure being as follows:
s001, acquiring images of the road and the fire fighting channel through equipment;
s002, defogging the image collected in the step S001, enhancing contrast, lossless amplification and stretching recovery;
s003, carrying out compression coding on the image processed in the step S002;
S004, performing road segmentation, vehicle segmentation, fire fighting channel segmentation and background segmentation on the image compression-coded in step S003 to form the image to be feature-extracted.
3. The vehicle lane-occupation identification method according to claim 2, wherein in step S1, feature point descriptors are generated from the image to be feature-extracted in step S004 using a SURF feature extraction algorithm.
4. The vehicle lane-occupation identification method according to claim 1, wherein the specific process of image identification in step S2 is as follows: calculating whether the Euclidean distance between two feature point descriptors is within a set threshold; if so, a match is identified, otherwise it is excluded.
5. The vehicle lane occupation identification method according to claim 1, wherein after the vehicle stay time is counted in step S4, it is determined whether the stay time exceeds a preset duration; if so, an alarm is raised in the three-dimensional visualization system; otherwise, no operation is performed.
6. The vehicle lane occupation identification method according to claim 5, wherein the three-dimensional visualization system pushes the alarm event, its time and its place to the manager through an APP and short messages.
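The decision in claims 5-6 amounts to comparing the counted stay time against a preset duration and, when exceeded, emitting an alarm event carrying the time and place for the push channel. The event field names, place string and timestamp below are hypothetical placeholders for the APP/short-message push.

```python
# Sketch of claims 5-6: when the counted stay time exceeds the preset
# duration, build an alarm event carrying time and place for the push
# channel; otherwise perform no operation. Field names are hypothetical.

def check_alarm(stay_seconds, preset_seconds, place, now):
    if stay_seconds > preset_seconds:
        return {"event": "fire channel occupied",
                "time": now,
                "place": place}
    return None                        # stay time within limit: no operation

alarm = check_alarm(stay_seconds=320, preset_seconds=300,
                    place="Building 3 fire fighting channel",
                    now="2021-01-19 10:32")
print(alarm is not None)               # True
```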
7. A vehicle lane occupation identification device, characterized by comprising:
a memory for storing executable instructions; and
a processor for executing the executable instructions stored in the memory, thereby implementing the vehicle lane occupation identification method according to any one of claims 1-6.
CN202110065613.3A 2021-01-19 2021-01-19 Method and device for identifying lane occupation of vehicle Active CN112396040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110065613.3A CN112396040B (en) 2021-01-19 2021-01-19 Method and device for identifying lane occupation of vehicle


Publications (2)

Publication Number Publication Date
CN112396040A CN112396040A (en) 2021-02-23
CN112396040B true CN112396040B (en) 2022-03-01

Family

ID=74625356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110065613.3A Active CN112396040B (en) 2021-01-19 2021-01-19 Method and device for identifying lane occupation of vehicle

Country Status (1)

Country Link
CN (1) CN112396040B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240222A (en) * 2013-06-19 2014-12-24 贺江涛 Intelligent detecting method and device for firefighting access blockage
CN105163014A (en) * 2015-09-15 2015-12-16 上海图甲信息科技有限公司 Road monitoring device and method
CN105314122A (en) * 2015-12-01 2016-02-10 浙江宇视科技有限公司 Unmanned aerial vehicle for emergency commanding and lane occupation evidence taking
CN106295636A (en) * 2016-07-21 2017-01-04 重庆大学 Passageway for fire apparatus based on multiple features fusion cascade classifier vehicle checking method
CN107274675A (en) * 2017-07-12 2017-10-20 济南浪潮高新科技投资发展有限公司 Monitoring system and method are unified in a kind of region passageway for fire apparatus
CN108197526A (en) * 2017-11-23 2018-06-22 西安艾润物联网技术服务有限责任公司 Detection method, system and computer readable storage medium
CN109657532A (en) * 2018-10-25 2019-04-19 安徽新浩信息科技有限公司 A kind of passageway for fire apparatus obstruction image-recognizing method based on artificial intelligence
CN109685899A (en) * 2018-12-25 2019-04-26 成都四方伟业软件股份有限公司 Three-dimensional visualization marks management system, method and computer storage medium
CN110659606A (en) * 2019-09-23 2020-01-07 重庆商勤科技有限公司 Fire fighting access occupation identification method and device, computer equipment and storage medium
CN110766915A (en) * 2019-09-19 2020-02-07 重庆特斯联智慧科技股份有限公司 Alarm method and system for identifying fire fighting access state
CN112001963A (en) * 2020-07-31 2020-11-27 浙江大华技术股份有限公司 Fire fighting channel investigation method, system and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060118636A1 (en) * 2004-12-07 2006-06-08 Planready, Inc. System and method for coordinating movement of personnel
CN103761699A (en) * 2014-01-15 2014-04-30 湖南嘉雄科技有限公司 Embedded intelligent firefighting management system
CN106571027A (en) * 2015-10-09 2017-04-19 北京文安智能技术股份有限公司 Method, device and system for monitoring illegally parked dense vehicles
CN105702048B (en) * 2016-03-23 2018-09-11 武汉理工大学 Highway front truck illegal road occupation identifying system based on automobile data recorder and method
CN109241896B (en) * 2018-08-28 2022-08-23 腾讯数码(天津)有限公司 Channel safety detection method and device and electronic equipment
CN111444845B (en) * 2020-03-26 2023-05-12 江苏集萃华科智能装备科技有限公司 Non-motor vehicle illegal stop recognition method, device and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Three-dimensional visualization fire fighting management platform; Tianjin Guoxin Haotian 3D Technology Co., Ltd.; 《http://www.howsky3d.com/Product/detail/item/16.html》; 20120911; pages 1-2 *
BIM-based logistics operation management of green buildings; Wang Shijun et al.; 《Green Building》; 20190320; pages 66-68 *
Application of UAVs in smart cities; Wuhan Tianbao Naite Technology Co., Ltd.; 《http://tianbaonet.com/xyxw/5829.jhtml》; 20210108; pages 1, 4-5, figure 8, figures 14-15 *

Also Published As

Publication number Publication date
CN112396040A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
Baran et al. A smart camera for the surveillance of vehicles in intelligent transportation systems
CN110853033B (en) Video detection method and device based on inter-frame similarity
CN109740424A (en) Traffic violations recognition methods and Related product
Saha et al. License Plate localization from vehicle images: An edge based multi-stage approach
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN105095905A (en) Target recognition method and target recognition device
KR20140090777A (en) Method for detecting and recogniting object using local binary patterns and apparatus thereof
Hu et al. Automatic recognition of cloud images by using visual saliency features
CN110826429A (en) Scenic spot video-based method and system for automatically monitoring travel emergency
KR100983777B1 (en) Image capture system for object recognitions and method for controlling the same
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN112200081A (en) Abnormal behavior identification method and device, electronic equipment and storage medium
Azad et al. New method for optimization of license plate recognition system with use of edge detection and connected component
CN109961425A (en) A kind of water quality recognition methods of Dynamic Water
CN110121109A (en) Towards the real-time source tracing method of monitoring system digital video, city video monitoring system
CN114332513A (en) New energy automobile abnormal parking amplification data detection method for smart city
KR102222109B1 (en) Integrated management system of parking enforcement image through deep learning
CN111753642B (en) Method and device for determining key frame
CN106778765B (en) License plate recognition method and device
CN112396040B (en) Method and device for identifying lane occupation of vehicle
CN116503820A (en) Road vehicle type based detection method and detection equipment
Abdulhussein et al. Computer vision to improve security surveillance through the identification of digital patterns
CN113936300A (en) Construction site personnel identification method, readable storage medium and electronic device
CN108399411B (en) A kind of multi-cam recognition methods and device
Kodwani Automatic Vehicle Detection, Tracking and Recognition of License Plate in Real Time Videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant