CN111194450A - Panoramic monitoring system - Google Patents

Panoramic monitoring system

Info

Publication number
CN111194450A
CN111194450A (application CN201880057385.1A)
Authority
CN
China
Prior art keywords
image
pixel movement
vehicle
information
abnormal pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880057385.1A
Other languages
Chinese (zh)
Inventor
李定俊
吴忠哉
金时郁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chemtronics Co ltd
Original Assignee
Chemtronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chemtronics Co ltd filed Critical Chemtronics Co ltd
Publication of CN111194450A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 - Status alarms
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 - Display means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 - Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 - Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 - Image sensing, e.g. optical camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a panoramic monitoring system. Specifically, the panoramic monitoring system of the present invention includes: a plurality of cameras installed at front, rear, left, and right sides of a vehicle, respectively, for photographing a surrounding image of the vehicle; a first blind area detection module that discriminates whether there is an object in a left blind area of the vehicle using left images among surrounding images of the vehicle captured by the plurality of cameras, respectively; and a second blind area detection module that discriminates whether there is an object in a right blind area of the vehicle using right images among surrounding images of the vehicle captured by the plurality of cameras, respectively.

Description

Panoramic monitoring system
Technical Field
The present invention relates to a panoramic monitoring system.
Background
Vehicle collision accidents have increased along with the widespread adoption of vehicles. In particular, when a vehicle is driven at low speed, such as while parking, the driver cannot fully perceive the constantly changing surroundings of the vehicle, so collision accidents become more frequent.
To improve this situation, a panoramic monitoring system (around view monitoring system) has been developed, which converts the scene around the vehicle into an image viewed from above (i.e., a top view) and presents it to the driver.
In addition, it is difficult for the driver to confirm a vehicle in a blind area through the side mirrors, so before changing lanes the driver must carefully check whether a vehicle is present in the blind area.
However, during high-speed driving it is difficult for the driver to keep watching the road ahead while also observing traffic conditions to the left and right.
To solve the above problem, high-priced cars have recently begun to be equipped with a blind spot detection system (also referred to as an intelligent rear-lateral warning system).
The blind spot warning system senses in real time whether an object is present by means of radars (i.e., sensors) arranged at the rear left and rear right of the vehicle, and displays the sensing result on the side mirror, so that the driver can easily tell whether a vehicle is in the blind area simply by looking at the mirror.
However, because of the cost of radar, most motorists drive vehicles without a blind spot warning system, and a system that addresses the above problems is therefore needed.
Further, such a blind spot warning system learns in advance, through pattern recognition, what a vehicle in a blind area looks like, and applies the learning data to the blind area recognition operation in real time. This is problematic because building that capability requires collecting a large amount of image (video) data over a long period to create a database, and then performing learning on the created database to extract the learning data.
In addition, because the blind spot warning system repeatedly searches multiple regions of an image to extract pattern data for recognizing a vehicle in the blind area and compares the extracted pattern data with the learning data, a high-performance processor (for example, a CPU) is required to process the extracted pattern data in real time.
Furthermore, to use the blind spot warning system, a dedicated camera facing the rear lateral side (i.e., the rear left and right sides) is mounted on the side mirror. The dedicated camera therefore protrudes below or beside the existing mirror mounting position, degrading the vehicle's appearance.
Disclosure of Invention
Technical problem
The invention aims to provide a panoramic monitoring system that performs the blind area recognition operation in real time without a separate learning process.
Technical scheme
In an embodiment of the present invention to solve the technical problem, a panoramic monitoring system includes: a plurality of cameras installed at front, rear, left, and right sides of the vehicle, respectively, for photographing a surrounding image of the vehicle; a first blind area detection module that discriminates whether there is an object in a left blind area of a vehicle using left images among surrounding images of the vehicle respectively captured by a plurality of cameras; and a second blind area detection module that discriminates whether there is an object in a right blind area of the vehicle using right images among surrounding images of the vehicle respectively captured by the plurality of cameras.
The first blind spot detection module includes: a left image selection unit that selects a left image from the surrounding images of the vehicle captured by the plurality of cameras; a left image mask processing unit that specifies a rear area of the left image selected by the left image selection unit; a left image pixel movement information generation unit that generates pixel movement information for the rear area of the left image specified by the left image mask processing unit; a left image abnormal pixel movement detection unit that detects abnormal pixel movement from the pixel movement information generated by the left image pixel movement information generation unit; and a left image object recognition unit that analyzes the abnormal pixel movement detected by the left image abnormal pixel movement detection unit to determine whether the abnormal pixel movement is an object.
The left-side image mask processing unit specifies a rear area of the left-side image selected by the left-side image selection unit using a mask of the rear left-side area, and the left-side image pixel movement information generation unit generates block information for the rear area of the left-side image specified by the left-side image mask processing unit, selects a pixel of interest based on the generated block information, searches for surrounding pixels related to the selected pixel of interest, and generates pixel movement information based on the search result.
The left-image abnormal pixel movement detecting section compares the pixel movement information generated by the left-image pixel movement information generating section with normal pixel movement information generated from a left-image of the vehicle when the vehicle is traveling normally, and detects abnormal pixel movement based on the comparison result, and the left-image object recognizing section analyzes the abnormal pixel movement detected by the left-image abnormal pixel movement detecting section to grasp a pixel distribution in the abnormal pixel movement, generates size and position information of the abnormal pixel movement based on the grasped pixel distribution, and discriminates whether the abnormal pixel movement is an object based on the generated size and position information of the abnormal pixel movement.
The second blind spot detection module includes: a right image selection unit that selects a right image from the surrounding images of the vehicle captured by the plurality of cameras; a right image mask processing unit that specifies a rear area of the right image selected by the right image selection unit; a right image pixel movement information generation unit that generates pixel movement information for the rear area of the right image specified by the right image mask processing unit; a right image abnormal pixel movement detection unit that detects abnormal pixel movement from the pixel movement information generated by the right image pixel movement information generation unit; and a right image object recognition unit that analyzes the abnormal pixel movement detected by the right image abnormal pixel movement detection unit to determine whether the abnormal pixel movement is an object.
The right image mask processing unit specifies the rear area of the right image selected by the right image selection unit using a mask of the rear right area, and the right image pixel movement information generation unit generates block information for the rear area of the right image specified by the right image mask processing unit, selects a pixel of interest based on the generated block information, searches for surrounding pixels related to the selected pixel of interest, and generates pixel movement information based on the search result.
The right image abnormal pixel movement detecting section compares the pixel movement information generated by the right image pixel movement information generating section with normal pixel movement information generated from a right image of the vehicle when the vehicle is traveling normally, and detects abnormal pixel movement based on the comparison result, and the right image object identifying section analyzes the abnormal pixel movement detected by the right image abnormal pixel movement detecting section to grasp a pixel distribution in the abnormal pixel movement, generates size and position information of the abnormal pixel movement based on the grasped pixel distribution, and discriminates whether the abnormal pixel movement is an object based on the generated size and position information of the abnormal pixel movement.
The panoramic monitoring system further comprises: a first warning identification module that generates left warning information based on the information on the presence or absence of the object discriminated by the first blind area detection module; and a second warning identification module which generates right warning information based on the information about the presence or absence of the object discriminated by the second blind area detection module.
The first warning identification module outputs the left warning information to a driver when an object exists in the left blind area of the vehicle, and the second warning identification module outputs the right warning information to the driver when an object exists in the right blind area of the vehicle.
The plurality of cameras includes: a first camera installed at a front end of a vehicle for photographing a front image of the vehicle; a second camera installed at a left side of the vehicle for photographing a left side image of the vehicle; a third camera installed at a right side of the vehicle for photographing a right side image of the vehicle; and a fourth camera installed at a rear end of the vehicle for photographing a rear image of the vehicle.
Effects of the invention
According to the invention, the blind area recognition operation can be carried out in real time without a separate learning process. In addition, since no separate learning process is required, a high-performance processor does not need to be installed. Furthermore, unlike the blind spot warning system, no camera is installed protruding below or beside the mirror mounting position, so degradation of the vehicle's appearance is prevented.
Drawings
FIG. 1 is a block diagram of a panoramic monitoring system according to an embodiment of the present invention.
Fig. 2 is a diagram for explaining the blind areas of a vehicle.
FIG. 3 is a block diagram of the first blind spot detection module of FIG. 1.
FIG. 4 is a block diagram of the second blind spot detection module of FIG. 1.
Fig. 5 is a schematic diagram illustrating a blind area warning situation of the panoramic monitoring system of fig. 1.
Detailed Description
So that the invention may be more readily understood, certain terms are defined below. Unless explicitly specified otherwise herein, scientific or technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Furthermore, unless otherwise specified herein, terms in the singular include the plural, and terms in the plural also include the singular.
Hereinafter, a panoramic monitoring system according to an embodiment of the present invention is described in detail with reference to fig. 1 to 5.
FIG. 1 is a block diagram of a panoramic monitoring system according to an embodiment of the present invention. Fig. 2 is a diagram for explaining the blind areas of a vehicle. FIG. 3 is a block diagram of the first blind spot detection module of FIG. 1. FIG. 4 is a block diagram of the second blind spot detection module of FIG. 1. Fig. 5 is a schematic diagram illustrating a blind area warning situation of the panoramic monitoring system of fig. 1.
Referring first to fig. 1, a panoramic monitoring system 1 according to an embodiment of the present invention may be installed in a vehicle, and includes a plurality of cameras 100, a first blind spot detection module 200, a second blind spot detection module 300, a first warning identification module 260, and a second warning identification module 360.
For reference, as shown in fig. 1, the first and second blind spot detection modules 200 and 300 are provided as separate modules, and the first and second warning identification modules 260 and 360 are likewise provided as separate modules, but the invention is not limited thereto. That is, the first blind spot detection module 200 and the second blind spot detection module 300 may be combined into a single blind spot detection module, and the first warning identification module 260 and the second warning identification module 360 may likewise be combined into a single warning identification module.
However, for convenience of description, the embodiment of the present invention is described with the first and second blind spot detection modules 200 and 300 and the first and second warning identification modules 260 and 360 provided separately, as an example.
The plurality of cameras 100 may be installed at front, rear, left, and right sides of the vehicle, respectively, to capture a surrounding image of the vehicle.
Specifically, the plurality of cameras 100 may include a first camera 100a installed at a front end of the vehicle and capturing a front image of the vehicle, a second camera 100b installed at a left side of the vehicle and capturing a left image of the vehicle, a third camera 100c installed at a right side of the vehicle and capturing a right image of the vehicle, and a fourth camera 100d installed at a rear end of the vehicle and capturing a rear image of the vehicle.
Unlike the conventional blind spot warning system, the cameras (e.g., 100b, 100c) of the panoramic monitoring system 1 are not installed protruding below or beside the mirror mounting position, so a reduction in the vehicle's appearance can be prevented.
Further, the surrounding images of the vehicle respectively captured by the plurality of cameras 100 may be provided to the first and second blind area detection modules 200 and 300.
For reference, wide-angle lenses may be used in the plurality of cameras 100, but the cameras are not limited thereto.
Fig. 2 illustrates the fields of view LSM and RSM that the driver can secure by means of the mirrors mounted on the vehicle C.
While driving, the driver can confirm objects (e.g., other vehicles) located within the left field of view LSM and the right field of view RSM through the mirrors.
However, the rear left area (LBS; i.e., the left blind area) and the rear right area (RBS; i.e., the right blind area) of the vehicle C, which fall outside the fields of view LSM and RSM, are blind areas in which it is difficult for the driver to recognize through the mirrors whether an object (C'; e.g., another vehicle) is present.
Referring to fig. 3, the first blind spot detection module 200, which discriminates whether an object exists in the left blind area (LBS of fig. 2), is illustrated.
The first blind area detection module 200 discriminates whether an object exists in a left blind area of the vehicle by using left images among surrounding images of the vehicle respectively captured by the plurality of cameras 100.
Specifically, the first blind spot detection module 200 includes a left image selection unit 210, a left image mask processing unit 220, a left image pixel movement information generation unit 230, a left image abnormal pixel movement detection unit 240, and a left image object recognition unit 250.
The left image selecting unit 210 may select a left image from among the surrounding images of the vehicle captured by the plurality of cameras 100, respectively. Further, the left image selecting section 210 supplies the selected left image to the left image mask processing section 220.
The left image mask processing section 220 may specify a rear region of the left image selected by the left image selecting section 210.
Specifically, the left image mask processing unit 220 specifies the rear area of the left image selected by the left image selection unit 210 by using a mask (not shown) of the rear left area. That is, the left image mask processing unit 220 eliminates unnecessary regions of the left image by applying the mask of the rear left area, so that only the rear portion of the image is specified.
Here, the mask of the rear left area may be used as an algorithm in the left image mask processing unit 220 and may be generated together with the panoramic image integrated information during the process of generating that information.
For reference, although not shown, the panoramic monitoring system 1 further includes a panoramic image generation unit that generates panoramic image integrated information based on the surrounding images of the vehicle captured by the plurality of cameras 100, and generates a panoramic image based on the generated panoramic image integrated information.
Here, the panoramic image integrated information may include information for stitching the images captured by the plurality of cameras 100 in accordance with tolerance parameter information, where the tolerance parameter information includes parameters that correct the tolerances between the cameras arising from differences in the mounting positions and angles of the plurality of cameras 100.
Therefore, since the panoramic image generation unit knows, from the generation of the panoramic image integrated information, the distance between the vehicle and each pixel of the captured images, the mask of the rear left area can be generated in the panoramic image generation unit together with the panoramic image integrated information.
That is, the left image mask processing unit 220 may specify the rear area of the left image by using the mask of the rear left area provided by the panoramic image generation unit.
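By way of illustration only, the sketch below shows how such a precomputed rear-left mask might be applied to a left-camera frame so that only the blind-area portion reaches the later stages. It is a minimal Python/NumPy example; the function and variable names (apply_rear_left_mask, rear_left_mask) are assumptions made for this sketch and are not defined in the patent.

```python
import numpy as np

def apply_rear_left_mask(left_image: np.ndarray, rear_left_mask: np.ndarray) -> np.ndarray:
    """Keep only the rear-left region of the left-side camera image.

    left_image:     H x W x 3 frame from the left camera (e.g., camera 100b).
    rear_left_mask: H x W binary array, 1 where the rear-left blind area is
                    visible, 0 elsewhere (assumed to be produced together with
                    the panoramic image integrated information).
    """
    if left_image.shape[:2] != rear_left_mask.shape:
        raise ValueError("mask must match the image resolution")
    # Zero out everything outside the rear-left region; downstream stages
    # then only see pixels that belong to the blind area of interest.
    return left_image * rear_left_mask[..., np.newaxis].astype(left_image.dtype)
```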
The left image pixel movement information generating unit 230 may generate the pixel movement information on the rear area of the left image specified by the left image mask processing unit 220.
Specifically, the left image pixel movement information generating unit 230 generates block information for the rear area of the left image specified by the left image mask processing unit 220, selects a pixel of interest based on the generated block information, searches for surrounding pixels related to the selected pixel of interest, and generates pixel movement information based on the search result.
For reference, the block information may include, for example, information on each divided area when the image captured by the camera is divided into a plurality of areas, or information on the criterion used to divide the image into the plurality of areas. Accordingly, the left image pixel movement information generating unit 230 divides the rear area of the left image into a plurality of blocks and selects a pixel of interest in each of the divided blocks.
A pixel of interest means a pixel whose size, hue, movement distance, or the like changes by at least a predetermined amount when the previous frame is compared with the current frame. The surrounding pixel search means searching for the position at which the selected pixel of interest was located in the previous frame. Therefore, if no pixel of interest is selected in a given block, the surrounding pixel search operation is not performed for that block.
That is, the left image pixel movement information generating unit 230 can generate the pixel movement information for the rear area of the left image by sequentially performing the block information generation operation, the pixel-of-interest selection operation, and the surrounding pixel search operation, as sketched below.
For reference, the pixel movement information may include information on how far, and in which direction, a particular pixel (e.g., a pixel of interest) has moved when the previous frame is compared with the current frame.
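A minimal sketch of the three operations just described (block information generation, pixel-of-interest selection, surrounding pixel search) is given below, assuming grayscale frames and simple absolute-difference criteria. The block size, change threshold, and search radius are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def generate_pixel_movement_info(prev_frame: np.ndarray, curr_frame: np.ndarray,
                                 block_size: int = 16, diff_threshold: int = 30,
                                 search_radius: int = 8):
    """Sketch of block division, pixel-of-interest selection, and
    surrounding pixel search on two same-sized grayscale frames."""
    h, w = curr_frame.shape
    movements = []  # one (y, x, dy, dx) entry per block that yields a pixel of interest
    for by in range(0, h - block_size + 1, block_size):
        for bx in range(0, w - block_size + 1, block_size):
            # Block information: the rear area is divided into fixed-size blocks.
            block_diff = np.abs(curr_frame[by:by+block_size, bx:bx+block_size].astype(int)
                                - prev_frame[by:by+block_size, bx:bx+block_size].astype(int))
            if block_diff.max() < diff_threshold:
                continue  # no pixel of interest in this block, so no search is performed
            # Pixel of interest: the pixel with the largest frame-to-frame change.
            py, px = np.unravel_index(np.argmax(block_diff), block_diff.shape)
            y, x = by + py, bx + px
            # Surrounding pixel search: find where a similar value sat in the
            # previous frame within a small neighbourhood of the pixel of interest.
            y0, y1 = max(0, y - search_radius), min(h, y + search_radius + 1)
            x0, x1 = max(0, x - search_radius), min(w, x + search_radius + 1)
            window = np.abs(prev_frame[y0:y1, x0:x1].astype(int) - int(curr_frame[y, x]))
            sy, sx = np.unravel_index(np.argmin(window), window.shape)
            movements.append((y, x, y - (y0 + sy), x - (x0 + sx)))  # position + motion vector
    return movements
```

In this sketch each returned entry pairs a pixel-of-interest position with its estimated frame-to-frame motion vector, which is the kind of pixel movement information described above.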
The left-image abnormal pixel movement detecting unit 240 may detect abnormal pixel movement in the pixel movement information generated by the left-image pixel movement information generating unit 230.
Specifically, the left image abnormal pixel movement detecting unit 240 compares the pixel movement information generated by the left image pixel movement information generating unit 230 with normal pixel movement information generated from the left image of the vehicle during normal driving, and can detect abnormal pixel movement (i.e., a portion where an error occurs) based on the comparison result.
Here, the left image abnormal pixel movement detecting unit 240 may hold the normal pixel movement information generated from the left image of the vehicle during normal driving, and this normal pixel movement information may be stored in the left image abnormal pixel movement detecting unit 240 in advance, at the manufacturing stage of the panoramic monitoring system 1.
For reference, the left image abnormal pixel movement detecting unit 240 may instead detect abnormal pixel movement by comparing the pixel movement information generated by the left image pixel movement information generating unit 230 with the pixel movement information generated together during the generation of the panoramic image integrated information.
Of course, the left image abnormal pixel movement detecting unit 240 may also detect abnormal pixel movement by comparing the pixel movement information generated by the left image pixel movement information generating unit 230 with both the normal pixel movement information and the pixel movement information generated during the generation of the panoramic image integrated information.
However, for convenience of explanation, in the embodiment of the present invention, the left image abnormal pixel movement detecting unit 240 is described as comparing the pixel movement information generated by the left image pixel movement information generating unit 230 with the normal pixel movement information.
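Continuing the sketch, a comparison against pre-stored normal pixel movement information might look as follows. Representing the normal movement as a callable and the particular tolerance value are assumptions made purely for illustration.

```python
def detect_abnormal_pixel_movement(pixel_movements, normal_movement, tolerance=2.0):
    """Sketch of the comparison with pre-stored normal pixel movement information.

    pixel_movements: list of (y, x, dy, dx) from the previous sketch.
    normal_movement: callable mapping an image position (y, x) to the (dy, dx)
                     expected during normal straight-ahead driving.
    """
    abnormal = []
    for y, x, dy, dx in pixel_movements:
        expected_dy, expected_dx = normal_movement(y, x)
        # A movement counts as abnormal when it differs from the expected
        # (background) motion by more than the tolerance in either axis.
        if abs(dy - expected_dy) > tolerance or abs(dx - expected_dx) > tolerance:
            abnormal.append((y, x, dy, dx))
    return abnormal
```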
The left image object recognition section 250 analyzes the abnormal pixel movement detected by the left image abnormal pixel movement detection section 240 to determine whether the abnormal pixel movement is an object.
Specifically, the left image object recognition section 250 analyzes the abnormal pixel movement detected by the left image abnormal pixel movement detection section 240 to determine the pixel distribution within the abnormal pixel movement, generates size and position information of the abnormal pixel movement based on the determined pixel distribution, and discriminates whether the abnormal pixel movement is an object based on the generated size and position information.
That is, the left image object recognition section 250 determines whether the pixels form a cluster by examining the distribution of the pixels exhibiting abnormal movement, and if the pixels form a cluster, it can generate size and position information of the corresponding pixel cluster (for example, the center position of the pixel cluster).
Further, the left image object recognition section 250 discriminates whether the corresponding pixel cluster is an object based on the generated size and position information of the pixel cluster, and supplies the discrimination result to the first warning identification module 260.
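As a simple illustration of the clustering and size/position step, a sketch is given below; the proximity-based grouping rule and the minimum cluster size are assumptions chosen for the example and are not criteria stated in the patent.

```python
def recognize_object(abnormal_movements, cluster_radius=20, min_cluster_size=10):
    """Group abnormal pixel movements into clusters by proximity, then report
    each cluster's size and centre position and whether it is large enough to
    be treated as an object."""
    clusters = []  # each cluster is a list of (y, x) positions
    for y, x, _, _ in abnormal_movements:
        for cluster in clusters:
            cy = sum(p[0] for p in cluster) / len(cluster)
            cx = sum(p[1] for p in cluster) / len(cluster)
            if abs(y - cy) <= cluster_radius and abs(x - cx) <= cluster_radius:
                cluster.append((y, x))
                break
        else:
            clusters.append([(y, x)])  # start a new cluster for isolated pixels
    results = []
    for cluster in clusters:
        size = len(cluster)
        center = (sum(p[0] for p in cluster) / size, sum(p[1] for p in cluster) / size)
        results.append({"size": size, "center": center,
                        "is_object": size >= min_cluster_size})
    return results
```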
As described above, the first blind spot detection module 200 does not perform pattern recognition; instead, it recognizes whether an object exists in the left blind area by sensing the movement of an object between frames and deriving the size, position, and moving direction of the moving object. The blind area recognition operation can therefore be performed in real time without an additional learning process.
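Purely for illustration, the hypothetical helpers sketched above could be chained into a single per-frame check that mirrors the module chain 210, 220, 230, 240, and 250; every name used here is one of the assumed names introduced in the earlier sketches.

```python
def detect_left_blind_spot(prev_left_frame, curr_left_frame, rear_left_mask, normal_movement):
    """Illustrative end-to-end pass over one pair of left-camera frames."""
    # Mask both frames to the rear-left region, then reduce to grayscale by
    # channel averaging (an assumption; any grayscale conversion would do).
    prev = apply_rear_left_mask(prev_left_frame, rear_left_mask).mean(axis=2)
    curr = apply_rear_left_mask(curr_left_frame, rear_left_mask).mean(axis=2)
    movements = generate_pixel_movement_info(prev, curr)                   # cf. unit 230
    abnormal = detect_abnormal_pixel_movement(movements, normal_movement)  # cf. unit 240
    clusters = recognize_object(abnormal)                                  # cf. unit 250
    return any(c["is_object"] for c in clusters)  # object present in the left blind area?
```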
Referring to fig. 4, the second blind spot detection module 300, which discriminates whether an object exists in the right blind area (RBS of fig. 2), is illustrated.
For reference, the second blind spot detection module 300 has the same structure and functions as the first blind spot detection module 200 shown in fig. 3, except that it monitors the right blind area, and it is therefore described only briefly here.
The second blind area detection module 300 discriminates whether an object exists in a right blind area of the vehicle by using right images among surrounding images of the vehicle respectively captured by the plurality of cameras 100.
Specifically, the second blind area detection module 300 may include a right image selection part 310, a right image mask processing part 320, a right image pixel movement information generation part 330, a right image abnormal pixel movement detection part 340, and a right image object recognition part 350.
The right image selecting unit 310 may select the right image from the surrounding images of the vehicle captured by the plurality of cameras 100, respectively. Further, the right image selecting section 310 supplies the selected right image to the right image mask processing section 320.
The right image mask processing section 320 may specify a region behind the right image selected by the right image selecting section 310.
Specifically, the right image mask processing section 320 may specify the rear area of the right image selected by the right image selecting section 310 by using a mask (not shown) of the rear right area. That is, the right image mask processing section 320 eliminates unnecessary regions of the right image by applying the mask of the rear right area, so that only the rear portion of the image is specified.
Here, like the mask of the rear left area, the mask of the rear right area may be used as an algorithm in the right image mask processing section 320 and may be generated together with the panoramic image integrated information during the process of generating that information.
That is, as described above, since the panoramic image generation unit (not shown) knows the distance between the vehicle and each pixel of the captured images from the generation of the panoramic image integrated information, the mask of the rear right area can also be generated in the panoramic image generation unit together with the panoramic image integrated information.
Accordingly, the right image mask processing section 320 specifies the rear area of the right image by using the mask of the rear right area provided by the panoramic image generation unit.
The right image pixel movement information generating section 330 may generate pixel movement information of a rear area of the right image specified by the right image mask processing section 320.
Specifically, the right image pixel movement information generating section 330 generates block information for the rear area of the right image specified by the right image mask processing section 320, selects a pixel of interest based on the generated block information, searches for surrounding pixels related to the selected pixel of interest, and generates pixel movement information based on the search result.
That is, the right image pixel movement information generating section 330 generates the pixel movement information for the rear area of the right image by sequentially performing the block information generation operation, the pixel-of-interest selection operation, and the surrounding pixel search operation.
The right image abnormal pixel movement detecting section 340 detects abnormal pixel movement in the pixel movement information generated by the right image pixel movement information generating section 330.
Specifically, the right image abnormal pixel movement detecting section 340 compares the pixel movement information generated by the right image pixel movement information generating section 330 with normal pixel movement information generated from the right image of the vehicle during normal driving, and detects abnormal pixel movement (i.e., a portion where an error occurs) based on the comparison result.
Here, the right image abnormal pixel movement detecting section 340 may hold the normal pixel movement information generated from the right image of the vehicle during normal driving, and this normal pixel movement information may be stored in the right image abnormal pixel movement detecting section 340 in advance, at the manufacturing stage of the panoramic monitoring system 1.
For reference, the right image abnormal pixel movement detecting section 340 may instead detect abnormal pixel movement by comparing the pixel movement information generated by the right image pixel movement information generating section 330 with the pixel movement information generated during the generation of the panoramic image integrated information.
Of course, the right image abnormal pixel movement detecting section 340 may also detect abnormal pixel movement by comparing the pixel movement information generated by the right image pixel movement information generating section 330 with both the normal pixel movement information and the pixel movement information generated during the generation of the panoramic image integrated information.
For convenience of explanation, in the embodiment of the present invention, the right image abnormal pixel movement detecting section 340 is described as comparing the pixel movement information generated by the right image pixel movement information generating section 330 with the normal pixel movement information.
The right image object recognition section 350 analyzes the abnormal pixel movement detected by the right image abnormal pixel movement detection section 340 to determine whether the abnormal pixel movement is an object.
Specifically, the right image object recognition section 350 analyzes the abnormal pixel movement detected by the right image abnormal pixel movement detection section 340 to determine the pixel distribution within the abnormal pixel movement, generates size and position information of the abnormal pixel movement based on the determined pixel distribution, and discriminates whether the abnormal pixel movement is an object based on the generated size and position information.
That is, the right image object recognition section 350 determines whether the pixels form a cluster by examining the distribution of the pixels exhibiting abnormal movement, and if the pixels form a cluster, it can generate size and position information of the corresponding pixel cluster (for example, the center position of the pixel cluster).
Further, the right image object recognition section 350 may discriminate whether the corresponding pixel cluster is an object based on the generated size and position information of the pixel cluster, and provide the discrimination result to the second warning identification module (360 of fig. 1).
Referring again to fig. 1, the first warning identification module 260 may generate left warning information based on the information about the presence or absence of the object recognized by the first blind spot detection module 200.
Specifically, the first warning identification module 260 generates the left warning information based on the discrimination information provided by the first blind spot detection module 200, and outputs the left warning information to the driver if the discrimination information indicates that an object (e.g., another vehicle) is present in the left blind area of the vehicle.
That is, the first warning identification module 260 provides the left warning information to a display (not shown) built into the vehicle, and the display may display the provided left warning information.
In addition, the second warning identification module 360 generates right warning information based on the information about the existence of the object identified by the second blind area detection module 300.
Specifically, the second warning identification module 360 generates right warning information based on the discrimination information supplied from the second blind area detection module 300, and outputs the right warning information to the driver in a case where the discrimination result indicates that an object is present in the right blind area of the vehicle.
That is, the second warning identification module 360 provides the right warning information to a display (not shown) built in the vehicle, and the display can display the provided right warning information.
Fig. 5 illustrates displaying warning information (e.g., left side warning information) on a display built into a vehicle.
That is, when another vehicle C' exists in the left blind area LBS, the display displays the left warning information A, so that the driver can immediately recognize that another vehicle C' exists in the left blind area LBS.
In addition, although not shown, the first and second warning identification modules 260 and 360 may also output the warning information as voice through a speaker (not shown) built into the vehicle, as sketched below.
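A minimal sketch of this notification step follows; the display and speaker objects are hypothetical placeholder interfaces assumed for the example, since the patent does not define a software API.

```python
def notify_driver(left_object: bool, right_object: bool, display, speaker=None):
    """Output warning information on the in-vehicle display and, optionally,
    through the built-in speaker (both are assumed placeholder interfaces)."""
    if left_object:
        display.show("Vehicle in LEFT blind spot")   # e.g., warning information A of Fig. 5
        if speaker is not None:
            speaker.say("Caution: vehicle on the rear left")
    if right_object:
        display.show("Vehicle in RIGHT blind spot")
        if speaker is not None:
            speaker.say("Caution: vehicle on the rear right")
```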
As described above, the panoramic monitoring system 1 according to the present invention can perform the blind area recognition operation in real time without an additional learning process. Further, since no additional learning process is required, a high-performance processor does not need to be installed. Furthermore, unlike the blind spot warning system, its cameras are not installed protruding below or beside the mirror mounting position, so degradation of the vehicle's appearance can be prevented.
The present invention has been described above. Those skilled in the art to which the present invention pertains may add, modify, delete, or supplement components without departing from the spirit of the present invention described in the claims, and such modifications and changes are also intended to fall within the scope of the present invention.

Claims (10)

1. A panoramic monitoring system, comprising:
a plurality of cameras installed at front, rear, left, and right sides of a vehicle, respectively, for photographing a surrounding image of the vehicle;
a first blind area detection module that discriminates whether there is an object in a left blind area of the vehicle using left images among surrounding images of the vehicle captured by the plurality of cameras, respectively; and
a second blind area detection module that discriminates whether there is an object in a right blind area of the vehicle using right images among surrounding images of the vehicle captured by the plurality of cameras, respectively.
2. The panoramic monitoring system of claim 1, wherein said first blind spot detection module comprises:
a left image selecting unit that selects the left image from the images of the surroundings of the vehicle captured by the plurality of cameras, respectively;
a left image mask processing unit that specifies a region behind the left image selected by the left image selection unit;
a left image pixel movement information generating unit that generates pixel movement information of a region behind the left image specified by the left image mask processing unit;
a left image abnormal pixel movement detecting unit that detects abnormal pixel movement from the pixel movement information generated by the left image pixel movement information generating unit;
and a left image object recognition unit configured to analyze the abnormal pixel movement detected by the left image abnormal pixel movement detection unit to determine whether the abnormal pixel movement is an object.
3. The panoramic monitoring system of claim 2, wherein,
the left-side image mask processing section specifies a rear area of the left-side image selected by the left-side image selecting section using a mask of the rear left-side area,
the left-side image pixel movement information generating unit generates block information for a rear area of the left-side image specified by the left-side image mask processing unit, selects a pixel of interest based on the generated block information, searches for surrounding pixels related to the selected pixel of interest, and generates the pixel movement information based on the search result.
4. The panoramic monitoring system of claim 2, wherein,
the left-image abnormal pixel movement detecting section compares the pixel movement information generated by the left-image pixel movement information generating section with normal pixel movement information generated from a left image of the vehicle when the vehicle is traveling normally, and detects the abnormal pixel movement based on a result of the comparison,
the left-image object recognition section analyzes the abnormal pixel movement detected by the left-image abnormal pixel movement detection section to grasp a pixel distribution in the abnormal pixel movement, generates size and position information of the abnormal pixel movement based on the grasped pixel distribution, and discriminates whether the abnormal pixel movement is an object based on the generated size and position information of the abnormal pixel movement.
5. The panoramic monitoring system of claim 1, wherein said second blind spot detection module comprises:
a right image selection unit configured to select the right image from the surrounding images of the vehicle captured by the plurality of cameras, respectively;
a right image mask processing unit that specifies a region behind the right image selected by the right image selecting unit;
a right image pixel movement information generating unit that generates pixel movement information on a rear area of the right image specified by the right image mask processing unit;
a right image abnormal pixel movement detecting unit that detects abnormal pixel movement from the pixel movement information generated by the right image pixel movement information generating unit;
and a right image object recognition unit configured to analyze the abnormal pixel movement detected by the right image abnormal pixel movement detection unit to determine whether the abnormal pixel movement is an object.
6. The panoramic monitoring system of claim 5, wherein,
the right image mask processing section specifies a rear area of the right image selected by the right image selecting section using a mask of the rear right area,
the right image pixel movement information generating unit generates block information for a region behind the right image specified by the right image mask processing unit, selects a pixel of interest based on the generated block information, searches for surrounding pixels related to the selected pixel of interest, and generates the pixel movement information based on the search result.
7. The panoramic monitoring system of claim 5, wherein,
the right-image abnormal pixel movement detecting section compares the pixel movement information generated by the right-image pixel movement information generating section with normal pixel movement information generated from a right-image of the vehicle at the time of normal traveling of the vehicle, and detects the abnormal pixel movement based on the comparison result,
the right-image object recognition section analyzes the abnormal pixel movement detected by the right-image abnormal pixel movement detection section to grasp a pixel distribution in the abnormal pixel movement, generates size and position information of the abnormal pixel movement based on the grasped pixel distribution, and discriminates whether the abnormal pixel movement is an object based on the generated size and position information of the abnormal pixel movement.
8. The panoramic monitoring system of claim 1, further comprising:
a first warning identification module that generates left warning information based on the information on the presence or absence of the object discriminated by the first blind area detection module; and
a second warning identification module that generates right warning information based on the information regarding the presence or absence of the object identified by the second blind area detection module.
9. The panoramic monitoring system of claim 1, wherein,
when an object exists in the left blind area of the vehicle, the first warning identification module outputs left warning information to a driver,
when an object exists in the right blind area of the vehicle, the second warning identification module outputs the right warning information to a driver.
10. The panoramic monitoring system of claim 1, wherein said plurality of cameras comprises:
a first camera installed at a front end of the vehicle for capturing a front image of the vehicle;
a second camera installed at a left side of the vehicle for photographing a left side image of the vehicle;
a third camera installed at a right side of the vehicle for photographing a right side image of the vehicle; and
a fourth camera installed at a rear end of the vehicle for photographing a rear image of the vehicle.
CN201880057385.1A 2017-10-31 2018-10-26 Panoramic monitoring system Pending CN111194450A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR10-2017-0143113 2017-10-31
KR1020170143113A KR102051324B1 (en) 2017-10-31 2017-10-31 Surround view monitoring system
PCT/KR2018/012824 WO2019088591A2 (en) 2017-10-31 2018-10-26 Surround view monitoring system

Publications (1)

Publication Number Publication Date
CN111194450A true CN111194450A (en) 2020-05-22

Family

ID=66333234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880057385.1A Pending CN111194450A (en) 2017-10-31 2018-10-26 Panoramic monitoring system

Country Status (3)

Country Link
KR (1) KR102051324B1 (en)
CN (1) CN111194450A (en)
WO (1) WO2019088591A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610593A (en) * 2019-10-16 2019-12-24 徐州筑之邦工程机械有限公司 Intelligent safety early warning system for mining truck

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002354466A (en) * 2001-05-23 2002-12-06 Nissan Motor Co Ltd Surrounding monitoring device for vehicle
CN103237685A (en) * 2010-12-30 2013-08-07 明智汽车公司 Apparatus and method for displaying a blind spot
CN105719311A (en) * 2014-12-19 2016-06-29 现代摩比斯株式会社 Vehicle System For Detecting Object And Operation Method Thereof
KR20160143595A (en) * 2016-08-31 2016-12-14 (주)캠시스 Apparatus and method for warning a dangerous element of surrounding of vehicle
CN106740470A (en) * 2016-11-21 2017-05-31 奇瑞汽车股份有限公司 A kind of blind area monitoring method and system based on full-view image system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR200325312Y1 (en) * 2003-05-23 2003-09-03 정운기 watch camera monitor of vehicles
KR20110076300A (en) * 2009-12-29 2011-07-06 전자부품연구원 Adaptive multi-mode view system for recognition of vision dead zone based on vehicle information and controlling method for the same
KR101472615B1 (en) * 2010-12-21 2014-12-16 삼성전기주식회사 System and method for warning lane departure
KR20130006752A (en) * 2011-06-23 2013-01-18 주식회사 만도 Lane recognizing apparatus and method thereof
KR101279712B1 (en) * 2011-09-09 2013-06-27 연세대학교 산학협력단 Apparatus and method for providing real-time lane detection, recording medium thereof
DE102015205507B3 (en) * 2015-03-26 2016-09-29 Zf Friedrichshafen Ag Rundsichtsystem for a vehicle

Also Published As

Publication number Publication date
WO2019088591A2 (en) 2019-05-09
KR20190048285A (en) 2019-05-09
KR102051324B1 (en) 2019-12-03
WO2019088591A3 (en) 2019-06-27

Similar Documents

Publication Publication Date Title
US11393217B2 (en) Vehicular vision system with detection and tracking of objects at the side of a vehicle
EP1930863B1 (en) Detecting and recognizing traffic signs
US8305431B2 (en) Device intended to support the driving of a motor vehicle comprising a system capable of capturing stereoscopic images
EP2924653B1 (en) Image processing apparatus and image processing method
EP1030188B1 (en) Situation awareness system
US7366325B2 (en) Moving object detection using low illumination depth capable computer vision
EP2523174B1 (en) Vision based night-time rear collision warning system, controller, and method of operating the same
EP3493152A1 (en) Image processing device and surroundings recognition device
US20170024622A1 (en) Surrounding environment recognition device
US20090303026A1 (en) Apparatus, method for detecting critical areas and pedestrian detection apparatus using the same
CN101396989A (en) Vehicle periphery monitoring apparatus and image displaying method
US20180114078A1 (en) Vehicle detection device, vehicle detection system, and vehicle detection method
JP2005309797A (en) Warning device for pedestrian
US6549124B1 (en) Environment monitoring system for a vehicle with an image pickup device
JP2009234344A (en) Adjustment device for photographing means and object detection device
CN113808418A (en) Road condition information display system, method, vehicle, computer device and storage medium
US20120189161A1 (en) Visual attention apparatus and control method based on mind awareness and display apparatus using the visual attention apparatus
KR20150018990A (en) Apparatus and method for guiding caution information of driving
CN111194450A (en) Panoramic monitoring system
EP3081433A1 (en) An improved camera module for vehicle
JP6891082B2 (en) Object distance detector
CN111133439B (en) Panoramic monitoring system
WO2021250934A1 (en) Image processing device and image processing method
US20230106188A1 (en) Vehicular vision system with object classification based on change in orientation of object
JP2011090490A (en) Obstacle recognition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200522