TW202241114A - Method for determining degree of crowding in vehicle, and system for determining degree of crowding in vehicle - Google Patents


Info

Publication number
TW202241114A
Authority
TW
Taiwan
Application number
TW111111019A
Other languages
Chinese (zh)
Other versions
TWI781069B (en
Inventor
齊藤大輝
塩田隆司
矢農正紀
Original Assignee
日商矢崎總業股份有限公司
Application filed by 日商矢崎總業股份有限公司
Application granted
Publication of TWI781069B
Publication of TW202241114A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors


Abstract

This method for determining the degree of crowding in a vehicle comprises: a step for acquiring an image obtained by capturing the interior 101 of a vehicle transporting passengers; a step for detecting people P3, P4, P5 from the image and generating, for the detected people, rectangular detection regions DA3, DA4, DA5 that surround the people, respectively; and a step for determining the degree of crowding in the vehicle on the basis of a subject region Tg of the image and the detection regions DA3, DA4, DA5. The subject region is a partial region of the image, and a region in which passengers standing in an aisle 103 of the vehicle are imaged. In the determining step, the degree of crowding in the vehicle is determined on the basis of the number of detection regions that overlap with the subject region.

Description

Method for determining the degree of crowding in a vehicle, and system for determining the degree of crowding in a vehicle

The present invention relates to a method for determining the degree of crowding in a vehicle and a system for determining the degree of crowding in a vehicle.

Devices that monitor the number of occupants in a vehicle are known. Patent Document 1 discloses a vehicle occupancy monitoring device that includes a first imaging device and a processing unit, is installed in the vehicle body, and monitors the number of occupants in the vehicle body. When the image variation value exceeds a threshold, the motion detection module of Patent Document 1 determines that a person is present in the recognition block.

[Prior Art Literature] [Patent Document] [Patent Document 1] JP-A-2015-7953

[Problems to Be Solved by the Invention] From the viewpoint of countermeasures against infectious diseases such as the recent coronavirus disease (COVID-19), technology that can appropriately determine the degree of crowding in a vehicle transporting passengers is desired. For example, it is preferable that the degree of crowding can be determined appropriately from captured in-vehicle images even when many passengers are standing in the aisle.

An object of the present invention is to provide a method and a system for determining the degree of crowding in a vehicle that can appropriately determine the degree of crowding of a vehicle transporting passengers.

[Means for Solving the Problems] The method for determining the degree of crowding in a vehicle according to the present invention includes: a step of acquiring a captured image of the interior of a vehicle transporting passengers; a step of detecting people from the image and generating, for each detected person, a rectangular detection region surrounding that person; and a step of determining the degree of crowding in the vehicle on the basis of a subject region of the image and the detection regions. The subject region is a partial region of the image in which passengers standing in the aisle of the vehicle are imaged. In the determining step, the degree of crowding in the vehicle is determined on the basis of the number of detection regions that overlap the subject region.

[Effects of the Invention] The method for determining the degree of crowding in a vehicle according to the present invention includes a step of detecting people from an image and generating, for each detected person, a rectangular detection region surrounding that person, and a step of determining the degree of crowding in the vehicle on the basis of a subject region of the image and the detection regions. The subject region is a partial region of the image in which passengers standing in the aisle of the vehicle are imaged. In the determining step, the degree of crowding in the vehicle is determined on the basis of the number of detection regions that overlap the subject region. The method for determining the degree of crowding in a vehicle according to the present invention thus has the effect that the degree of crowding of a vehicle transporting passengers can be determined appropriately.

Hereinafter, a method for determining the degree of crowding in a vehicle and a system for determining the degree of crowding in a vehicle according to an embodiment of the present invention will be described in detail with reference to the drawings. The present invention is not limited to this embodiment. The components of the embodiment described below include those that a person skilled in the art could easily replace and those that are substantially identical.

[Embodiment] An embodiment will be described with reference to FIGS. 1 to 5. This embodiment relates to a method and a system for determining the degree of crowding in a vehicle. FIG. 1 is a block diagram showing the vehicle crowding determination system according to the embodiment, FIG. 2 shows an image captured by a first camera, FIG. 3 shows an image captured by a second camera, FIG. 4 illustrates the determination of the degree of overlap, and FIG. 5 is a flowchart showing the operation of the embodiment.

As shown in FIG. 1, the vehicle crowding determination system 1 includes an on-vehicle device 2, cameras 3, and a workstation PC 7. The on-vehicle device 2 and the cameras 3 are installed in a vehicle 100 that transports passengers. The vehicle 100 of this embodiment is a public bus, for example a route bus that travels a predetermined route, picking up and dropping off passengers at stops along the route. In addition to the on-vehicle device 2 and the cameras 3, the vehicle 100 is equipped with door sensors 4.

The on-vehicle device 2 is, for example, a drive recorder or a digital drive recorder that records the operating conditions of the vehicle 100. The on-vehicle device 2 has a CPU 21, a memory 22, a GPS receiver 23, and a communication unit 24. The CPU 21 is an arithmetic device that performs various computations and executes the operations of this embodiment, for example, according to programs stored in the memory 22.

The memory 22 is a storage unit including volatile memory and nonvolatile memory. The GPS receiver 23 calculates the current position of the vehicle 100 based on signals transmitted from satellites. The communication unit 24 is a communication module that communicates with a wireless base station 5; following instructions from the CPU 21, it communicates wirelessly with the wireless base station 5 via the antenna of the vehicle 100.

The cameras 3 are imaging devices that capture the interior of the vehicle 100 and output images G1 and G2. The positions and angles of view of the cameras 3 are set so that passengers in the vehicle can be photographed. The vehicle 100 of this embodiment has a first camera 31 and a second camera 32 as the cameras 3. The first camera 31 photographs the vicinity of the exit inside the vehicle, and the second camera 32 photographs the vicinity of the entrance. The on-vehicle device 2 acquires the images captured by the cameras 3 and stores them in the memory 22.

The door sensors 4 detect the open/closed state of the entrance and exit doors of the vehicle 100, for example whether a door is fully closed. A door sensor 4 is arranged at each of the entrance door and the exit door. The detection results of the door sensors 4 are sent to the CPU 21.

The on-vehicle device 2 transmits data for determining the degree of crowding to the outside through the communication unit 24. The transmitted data includes, for example, an image captured by the cameras 3, the capture time of that image, the position of the vehicle 100 at the capture time, and an identification code of the vehicle 100. The image data transmitted by the communication unit 24 is, for example, still-image data. The transmitted data is stored, for example, on a server 6 connected to the Internet NW.

The workstation PC 7 is, for example, a general-purpose computer installed at a business office or the like. The workstation PC 7 has a function of managing the operating status of the vehicles 100 under management and a function of determining the degree of crowding of the vehicles 100. The workstation PC 7 has a CPU 71, a memory 72, a communication unit 73, and an external input interface 74, and communicates with the server 6 and the wireless base station 5 via the communication unit 73 and the Internet NW. The workstation PC 7 executes the management of operating status and the determination of the degree of crowding, for example, according to programs read from the memory 72 into the CPU 71.

FIG. 2 shows an example of the image G1 captured by the first camera 31. The image G1 shows the interior 101 of the vehicle 100; more specifically, the driver's seat 102, the aisle 103, and the exit 105. The exit 105 is arranged at the front of the vehicle 100 and is opened and closed by the exit door 104.

FIG. 3 shows an example of the image G2 captured by the second camera 32. The image G2 shows the interior 101 of the vehicle 100; more specifically, the aisle 103, the entrance 107, rear seats 108, and front seats 109. The entrance 107 is arranged in the middle portion of the vehicle 100 and is opened and closed by the entrance door 106. The rear seats 108 are arranged rearward of the entrance 107 and face the front of the vehicle. The front seats 109 are arranged forward of the entrance 107 and face the vehicle width direction.

The on-vehicle device 2 transmits images G1 and G2 captured after passengers have finished boarding and alighting. For example, the on-vehicle device 2 transmits to the Internet NW the images G1 and G2 captured when both the exit door 104 and the entrance door 106 are fully closed. Using images captured when passenger movement is low allows the degree of crowding in the interior 101 to be estimated with high precision.

The workstation PC 7 acquires the data for determining the degree of crowding of the vehicle 100 from the server 6 through the communication unit 73 and stores the acquired data in the memory 72. The CPU 71 has a detection unit 71a, an extraction unit 71b, and a determination unit 71c.

The detection unit 71a, the extraction unit 71b, and the determination unit 71c may be part of a program executed by the CPU 71, part of a circuit included in the CPU 71, a chip mounted on the CPU 71, or the like. That is, it suffices that the CPU 71 is configured to be able to execute the function of the detection unit 71a, the function of the extraction unit 71b, and the function of the determination unit 71c.

The vehicle crowding determination system of this embodiment determines the degree of crowding of the vehicle 100 based on two indices. The first index is the density of passengers: the CPU 71 detects people from the images G1 and G2 and calculates the density of passengers in the aisle 103. The second index is the total detection count of passengers: the CPU 71 calculates the total number of passengers detected from the image G2. That is, the density is calculated from both images G1 and G2, while the total detection count is calculated from the image G2. Based on both the passenger density and the total detection count, the vehicle crowding determination system 1 of this embodiment can determine the degree of crowding of the vehicle 100 with higher accuracy.

The detection unit 71a detects people from the images G1 and G2 and generates a detection region DA for each detected person, as shown in FIG. 2. The detection region DA is a so-called bounding box. In FIG. 2, three people are detected: person P0, person P1, and person P2. The detection unit 71a generates a detection region DA0 surrounding person P0, a detection region DA1 surrounding person P1, and a detection region DA2 surrounding person P2. The detection unit 71a detects all people, both standing and seated, and generates a detection region DA for every detected person.

As shown in FIG. 3, the detection unit 71a generates detection regions DA3, DA4, and DA5 for people P3, P4, and P5 detected from the image G2. Note that a detection region DA is a region recognized as containing a person in the image and may not enclose the whole person.

The extraction unit 71b extracts, from all generated detection regions DA, the detection regions DA subject to the density calculation and the detection regions DA subject to the total-count calculation. The driver of the vehicle 100 is excluded from the density calculation and also from the total-count calculation. Person P0 detected in FIG. 2 is the driver of the vehicle 100. As described below, the extraction unit 71b identifies the driver and excludes the driver from the calculation targets.

As shown in FIG. 2, the image G1 is divided equally into nine areas A11 to A19. Each area is rectangular. In the illustrated image G1, each area is 240 pixels wide and 160 pixels tall; that is, each area consists of 38,400 pixels.

In the image G1, areas A11, A14, and A17 are set as areas where the driver is imaged. When the center coordinates of a detection region DA lie in any of areas A11, A14, or A17, the extraction unit 71b judges that the detection region DA corresponds to the driver. In FIG. 2, the center of the detection region DA0 lies in area A14, so the person P0 corresponding to the detection region DA0 can be judged to be the driver. The extraction unit 71b therefore excludes the detection region DA0 from both the density and total-count calculations. The centers of the detection regions DA1 and DA2, on the other hand, do not lie in any of areas A11, A14, or A17, so the extraction unit 71b adopts the detection regions DA1 and DA2 for the density calculation. Because the detection regions DA1 and DA2 belong to the image G1, they are not subject to the total-count calculation.
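The extraction step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper names are ours, and the grid is assumed to be numbered row by row from the image origin so that A11, A14, and A17 form the column in which the driver is imaged.

```python
from dataclasses import dataclass

# Image G1 (assumed 720 x 480 pixels) is divided into a 3x3 grid of
# 240 x 160-pixel areas A11..A19.
CELL_W, CELL_H = 240, 160

@dataclass
class Box:
    """A rectangular detection region DA in image coordinates."""
    x1: int
    y1: int
    x2: int
    y2: int

    def center(self):
        return ((self.x1 + self.x2) / 2, (self.y1 + self.y2) / 2)

def area_of(point, base=11):
    """Map a pixel coordinate to its grid area number (A11..A19 for image G1)."""
    x, y = point
    col = min(int(x // CELL_W), 2)
    row = min(int(y // CELL_H), 2)
    return base + row * 3 + col

DRIVER_AREAS = {11, 14, 17}  # areas of G1 in which the driver is imaged

def density_candidates(boxes):
    """Keep only detection regions whose center lies outside the driver areas."""
    return [b for b in boxes if area_of(b.center()) not in DRIVER_AREAS]
```

Under these assumptions, a detection region centered at (300, 200) falls in area A15 and is kept, while one centered at (100, 250) falls in area A14 and is treated as the driver.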

As shown in FIG. 3, the image G2 is likewise divided equally into nine rectangular areas A21 to A29. In the illustrated image G2, each area is 240 pixels wide and 160 pixels tall; that is, each area consists of 38,400 pixels.

The image G2 does not contain an area in which the driver is imaged, so the extraction unit 71b does not judge whether a detection region DA corresponds to the driver for the image G2. In the image G2, areas A23, A26, and A29 are areas where seated passengers are imaged; in the following description, an area where seated passengers are imaged is called a "first area". When the center coordinates of a detection region DA lie in a first area, the extraction unit 71b judges that the detection region DA corresponds to a seated passenger. In FIG. 3, the center of the detection region DA5 lies in area A29, so the detected person P5 can be judged to be a seated passenger.

Seated passengers have little bearing on the density of the aisle 103, so the extraction unit 71b excludes the detection region DA5 from the density calculation. On the other hand, the extraction unit 71b adopts the detection region DA5 for the total-count calculation.

The centers of the detection regions DA3 and DA4 do not lie in any of areas A23, A26, or A29, so the extraction unit 71b adopts the detection regions DA3 and DA4 for both the density and total-count calculations.

The determination unit 71c calculates the density and the total detection count and determines the degree of crowding of the vehicle 100 from the results. The density of this embodiment is calculated as follows. In the image G1 shown in FIG. 2, the area for which the density is calculated is area A15, the area in which passengers standing at the exit-105 end of the aisle 103 are imaged.

The determination unit 71c sets area A15 as the subject region Tg. For each detection region DA, the determination unit 71c judges whether the number of pixels overlapping the subject region Tg is equal to or greater than a threshold Th1. For example, as shown in FIG. 4, the determination unit 71c detects the region Ov1 where the subject region Tg and the detection region DA1 overlap. When the number of pixels in the region Ov1 is equal to or greater than the threshold Th1, the determination unit 71c increments the count C0 for the subject region Tg. The initial value of the count C0 is 0. The threshold Th1 is, for example, 1,024 pixels; that is, when the region Ov1 contains 1,024 or more pixels, the detection region DA1 is counted as one element of the density.

The determination unit 71c likewise detects the region overlapping the subject region Tg for each of the other detection regions DA and, whenever the number of overlapping pixels is equal to or greater than the threshold Th1, increments the count C0 for the subject region Tg. The determination unit 71c judges the degree of overlap with the subject region Tg for every detection region DA subject to the density calculation, stores the resulting count C0 as the count C15 for area A15, and then initializes the count C0.
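The overlap counting described above amounts to rectangle intersection followed by thresholding. A minimal sketch, with our own function names and rectangles written as (x1, y1, x2, y2) tuples in image coordinates:

```python
def overlap_pixels(box, tg):
    """Number of pixels shared by a detection region DA and the subject region Tg."""
    w = min(box[2], tg[2]) - max(box[0], tg[0])
    h = min(box[3], tg[3]) - max(box[1], tg[1])
    return max(w, 0) * max(h, 0)  # zero when the rectangles do not intersect

def area_count(boxes, tg, threshold):
    """Count C0: detection regions overlapping Tg by at least `threshold` pixels
    (Th1 = 1,024 for image G1, Th2 = 3,072 for image G2)."""
    return sum(1 for b in boxes if overlap_pixels(b, tg) >= threshold)

# Assuming the 3x3 grid starts at the image origin, area A15 is the center
# cell of G1: x in [240, 480), y in [160, 320).
A15 = (240, 160, 480, 320)
```

For example, a detection region (200, 100, 380, 300) overlaps A15 by 140 × 140 = 19,600 pixels and is therefore counted against Th1 = 1,024.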

The determination unit 71c also judges the degree of overlap for the image G2. In the image G2 shown in FIG. 3, the subject regions Tg are areas A25 and A28, the areas in which passengers standing near the entrance 107 in the aisle 103 are imaged.

The determination unit 71c first sets area A25 as the subject region Tg and judges whether the number of pixels by which each detection region DA of the image G2 overlaps the subject region Tg is equal to or greater than a threshold Th2, for example 3,072 pixels. When the number of pixels in the region where the subject region Tg and a detection region DA overlap is equal to or greater than the threshold Th2, the determination unit 71c increments the count C0 for the subject region Tg. The determination unit 71c judges the degree of overlap with the subject region Tg for every detection region DA subject to the density calculation, stores the resulting count C0 as the count C25 for area A25, and then initializes the count C0.

The determination unit 71c then sets area A28 as the subject region Tg, calculates the count C0 in the same way, and stores it as the count C28 for area A28. The determination unit 71c also calculates the total detection count of passengers for the image G2. The total detection count CA is the number of detection regions DA generated for the image G2 that are subject to the total-count calculation. In the image G2 illustrated in FIG. 3, the detection regions DA3, DA4, and DA5 are subject to the total-count calculation, so the total detection count CA is 3.

The CPU 71 determines the degree of crowding according to the flowchart shown in FIG. 5, which it executes after performing the step of acquiring the images G1 and G2.

In step S10, the detection unit 71a performs object detection on the images G1 and G2 and computes the detection regions DA. Step S10 is the step of detecting people from the images G1 and G2 and generating, for each detected person, a rectangular detection region DA surrounding that person. When step S10 has been executed, the process proceeds to step S20.

In step S20, the extraction unit 71b extracts the detection regions DA subject to the density calculation and the detection regions DA subject to the total-count calculation. When step S20 has been executed, the process proceeds to step S30.

In step S30, the determination unit 71c calculates the overlap between the detection regions DA and the subject regions Tg and computes the counts C15, C25, and C28 for areas A15, A25, and A28, respectively. When step S30 has been executed, the process proceeds to step S40.

In step S40, the determination unit 71c judges whether the crowding indices satisfy the condition for level II. The vehicle crowding determination system 1 of this embodiment classifies the degree of crowding into three levels: level I, level II, and level III. Level I is the least crowded level and level III the most crowded. Level I corresponds, for example, to a degree of crowding with two or fewer standing passengers (strollers, passengers about to sit down, and bus-company employees are not counted as "standing passengers"). Level II corresponds, for example, to three or more standing passengers with the central portion of the aisle 103 unused. Level III corresponds, for example, to passengers standing in the central portion of the aisle 103. The central portion of the aisle 103 is, for example, the intermediate portion of the aisle 103 between the entrance door 106 and the exit door 104.

The condition for level II is called the first condition. The first condition is that expression (1) below holds; that is, when the sum of the count C15 for area A15 and the count C28 for area A28 is 2 or more, the degree of crowding is judged to be level II or higher. The condition for level I is that the sum of the count C15 and the count C28 is less than 2. When the first condition holds, the determination unit 71c makes an affirmative determination in step S40 and proceeds to step S50; when it makes a negative determination in step S40, it proceeds to step S80.

C15 + C28 ≧ 2 (1)

In step S50, the determination unit 71c judges whether the crowding indices satisfy the condition for level III. The condition for level III is that a second condition holds in addition to the first condition. The second condition is that expressions (2), (3), (4), and (5) below all hold; that is, when the first condition holds and "the count C15 for area A15 is 1 or more, the counts C25 for area A25 and C28 for area A28 are both 2 or more, and the total detection count CA is 11 or more", the degree of crowding is judged to be level III.

C15 ≧ 1 (2)
C25 ≧ 2 (3)
C28 ≧ 2 (4)
CA ≧ 11 (5)
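The level decision of steps S40 to S80 can be sketched as a single function over the counts and the total detection count. The function name and signature are ours; the logic follows expressions (1) to (5) above.

```python
def crowding_level(c15: int, c25: int, c28: int, ca: int) -> str:
    """Return crowding level 'I', 'II', or 'III' from the counts C15, C25, C28
    and the total detection count CA."""
    first = c15 + c28 >= 2                                     # expression (1)
    second = c15 >= 1 and c25 >= 2 and c28 >= 2 and ca >= 11   # expressions (2)-(5)
    if first and second:
        return "III"  # step S60
    if first:
        return "II"   # step S70
    return "I"        # step S80
```

With C15 = 1, C25 = 2, C28 = 2, and CA = 11, all five expressions hold and the result is level III; dropping CA to 10 violates expression (5) and the result falls back to level II.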

When the second condition is satisfied, the determination unit 71c makes an affirmative determination in step S50 and proceeds to step S60. When the determination unit 71c makes a negative determination in step S50, it proceeds to step S70.

In step S60, the determination unit 71c assigns the value of Level III to the congestion level Lv. When step S60 has been executed, the flowchart ends.

In step S70, the determination unit 71c assigns the value of Level II to the congestion level Lv. When step S70 has been executed, the flowchart ends.

In step S80, the determination unit 71c assigns the value of Level I to the congestion level Lv. When step S80 has been executed, the flowchart ends.
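The decision flow of steps S40 through S80 can be sketched as below using formulas (1) through (5); the function name and the way the counts are passed in are illustrative choices, not part of the patent:

```python
def determine_congestion_level(c15, c25, c28, ca):
    """Sketch of steps S40-S80: map the count numbers C15, C25, C28 and
    the total detection number CA to congestion level I, II, or III."""
    first_condition = (c15 + c28) >= 2              # formula (1), step S40
    if not first_condition:
        return "I"                                  # step S80
    second_condition = (c15 >= 1 and c25 >= 2       # formulas (2), (3)
                        and c28 >= 2 and ca >= 11)  # formulas (4), (5), step S50
    return "III" if second_condition else "II"      # steps S60 / S70
```

For instance, with C15 = 1, C25 = 2, C28 = 2, and CA = 11, both conditions hold and the sketch returns Level III, matching the determination described above.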

The CPU 71 can transmit information on the congestion level Lv to the outside. For example, the congestion level Lv may be wirelessly transmitted to a bus stop. In this case, the congestion level Lv of the vehicle 100 that will arrive at the stop is shown on the display at the stop. For example, the congestion level Lv may be transmitted to a user's mobile terminal. In this case, the congestion level Lv of the vehicle 100 can be displayed on the user's mobile terminal by an application. For example, the congestion level Lv may be transmitted to other route buses traveling the same route as the vehicle 100. The congestion level Lv may also be used for managing the operation schedule of the route buses including the vehicle 100.
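As a sketch of what such an external transmission might carry, the message below bundles the level Lv with a vehicle identifier and a timestamp; the field names, the JSON format, and the function name are assumptions for illustration and are not specified in the patent:

```python
import json

def build_congestion_message(vehicle_id, lv, timestamp):
    """Assumed payload for pushing the congestion level Lv to a stop
    display or a user's mobile application (illustrative format)."""
    return json.dumps({
        "vehicle": vehicle_id,  # e.g. an identifier of vehicle 100
        "level": lv,            # "I", "II", or "III"
        "time": timestamp,      # when the determination was made
    })
```

A stop display or mobile application would parse this payload and render the level, e.g. `json.loads(msg)["level"]`.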

As described above, the vehicle congestion degree determination method of the present embodiment includes an acquisition step, a generation step, and a determination step. In the acquisition step, images G1 and G2 capturing the interior 101 of the vehicle 100 transporting passengers are acquired. In the generation step, persons are detected from the images G1 and G2, and for each detected person, a rectangular detection area DA surrounding that person is generated. In the flowchart of FIG. 5, step S10 corresponds to the generation step.

In the determination step, the degree of congestion of the vehicle 100 is determined based on the target area Tg of the images G1 and G2 and the detection areas DA. In the flowchart of FIG. 5, steps S30 to S80 correspond to the determination step. The target area Tg is an area forming part of the images G1 and G2, and is an area in which passengers standing in the aisle 103 of the vehicle 100 are captured. In the determination step, the degree of congestion of the vehicle 100 is determined based on the number of detection areas DA overlapping the target area Tg.
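The counting in the determination step can be sketched as follows, with rectangles represented as (x, y, width, height) tuples; the rectangle format, the function names, and the use of a minimum pixel count as the overlap criterion (a stand-in for thresholds such as Th3 mentioned later) are illustrative assumptions:

```python
def overlap_pixels(a, b):
    """Number of pixels in the intersection of two rectangles (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def count_overlapping(target_area, detection_areas, min_pixels=1):
    """Count the detection areas DA whose overlap with the target area Tg
    reaches min_pixels."""
    return sum(1 for da in detection_areas
               if overlap_pixels(target_area, da) >= min_pixels)
```

The resulting count plays the role of the count numbers (e.g. C15, C25, C28) that feed the level determination.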

The number of detection areas DA overlapping the target area Tg indicates the density of passengers standing in the aisle 103. Therefore, the vehicle congestion degree determination method of the present embodiment can appropriately determine the degree of congestion of a vehicle transporting passengers. Moreover, by detecting persons from the entirety of the wide images G1 and G2, good detection accuracy can be achieved compared with detecting persons only from the target area Tg.

The vehicle congestion degree determination method described above is also useful, for example, from the viewpoint of countermeasures against infectious diseases such as the recent novel coronavirus (COVID-19). For example, it can prompt users to grasp the degree of congestion of the vehicle 100 in a timely and accurate manner and avoid crowded use, and it also allows operators to improve their operation plans so as to reduce crowding. Furthermore, in the vehicle congestion degree determination method of the present embodiment, the degree of congestion is determined based on still images. Therefore, an increase in the load on each unit involved in the determination can be suppressed. For example, when the images captured by the cameras 3 are transmitted to the Internet NW, the amount of data is small, so the communication load is reduced. When the on-board unit 2 performs the congestion determination, the computational load on the on-board unit 2 is reduced.

The target area Tg of the present embodiment is an area in which passengers standing near the boarding and alighting entrances of the vehicle 100 are captured. When the vehicle 100 is crowded, many passengers are considered to stand near the entrances. Therefore, the vehicle congestion degree determination method of the present embodiment can appropriately determine the degree of congestion of the vehicle 100.

The image G2 of the present embodiment has a first area in which seated passengers are captured. A detection area DA whose center coordinates lie in the first area is excluded from the calculation of the count number C0 of detection areas overlapping the target area Tg. Therefore, the vehicle congestion degree determination method of the present embodiment can exclude seated passengers and accurately calculate the density in the aisle 103.
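The exclusion rule above can be sketched as a filter applied to the detection areas before counting; the rectangle format (x, y, w, h) and the function names are assumptions for illustration:

```python
def center(rect):
    """Center coordinates of a rectangle (x, y, w, h)."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def inside(point, rect):
    """True when the point lies within the rectangle (x, y, w, h)."""
    px, py = point
    x, y, w, h = rect
    return x <= px < x + w and y <= py < y + h

def filter_standing(detection_areas, first_area):
    """Drop detection areas DA whose center lies in the first area
    (where seated passengers appear), per the exclusion rule."""
    return [da for da in detection_areas
            if not inside(center(da), first_area)]
```

Only the detection areas that survive this filter would be counted against the target area Tg.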

In the vehicle congestion degree determination method of the present embodiment, in the determination step, the degree of congestion of the vehicle 100 is determined based on the count number of detection areas DA overlapping the target area Tg and the total detection number CA of persons detected from the image G2. Therefore, the degree of congestion of the vehicle 100 can be appropriately determined based on both the density in the aisle 103 and the total detection number CA.

In the vehicle congestion degree determination method of the present embodiment, a first image and a second image are acquired in the acquisition step. The first image corresponds to the image G1 capturing passengers standing near the alighting entrance 105 in the aisle 103. The second image corresponds to the image G2 capturing passengers standing near the boarding entrance 107 in the aisle 103. In the determination step, the degree of congestion of the vehicle 100 is determined based on the first image and the second image. Based on two images, the degree of congestion of the vehicle 100 can be determined with higher accuracy.

The vehicle congestion degree determination system 1 of the present embodiment includes the cameras 3, the detection unit 71a, and the determination unit 71c. The determination unit 71c determines the degree of congestion of the vehicle 100 according to the number of detection areas DA overlapping the target area Tg. Therefore, the congestion degree determination system 1 can appropriately determine the degree of congestion of a vehicle transporting passengers.

The vehicle congestion degree determination method and the congestion degree determination system 1 of the present embodiment can determine the degree of congestion of the vehicle 100 with good accuracy. For example, by detecting persons not only in the target area Tg but in the entirety of the images G1 and G2, missed detections and false detections can be suppressed. As a comparative example, suppose the angle of view of the camera 3 is set so that only the target area Tg is captured. In this case, when passengers are dense, multiple persons overlap in the image, and it becomes difficult to detect persons hidden behind others. In contrast, in the present embodiment, the angle of view of the camera 3 is set to capture a wider range than the target area Tg. Accordingly, missed detections and false detections of persons are less likely to occur.

The camera 3 of the present embodiment is arranged to capture the vehicle interior 101 from a bird's-eye view. Accordingly, even when passengers are dense in the aisle 103, individual passengers are easy to detect.

Moreover, the angle of view of the camera 3 is set to capture not only passengers standing in the aisle 103 but also passengers sitting in the seats 108 and 109. Therefore, based on both the density and the total detection number, the degree of congestion can be determined with high accuracy.

According to the vehicle congestion degree determination system 1 and the vehicle congestion degree determination method of the present embodiment, congestion in the vehicle 100 can be alleviated. For example, if information on the degree of congestion of the vehicles 100 is provided to users in real time, passengers are less likely to concentrate on a specific vehicle 100. As a result, crowding of passengers in the vehicle 100 can be prevented in advance, and virus infection inside the vehicle can be suppressed.

The determination unit 71c may also set a plurality of areas together as one target area Tg. For example, in the image G2, the determination unit 71c may set the area A25 and the area A28 together as one target area Tg. In this case, when the number of pixels in the region where the target area Tg and a detection area DA overlap is equal to or greater than the threshold Th3, the count number for the areas A25 and A28 is incremented.
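This combined-area variant can be sketched by summing a detection area's overlap over the constituent regions before comparing with Th3; the rectangle format (x, y, w, h), the function names, and the assumption that the regions do not overlap each other are illustrative:

```python
def rect_overlap(a, b):
    """Number of pixels in the intersection of two rectangles (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def count_for_combined_area(regions, detection_areas, th3):
    """Treat the non-overlapping regions, e.g. A25 and A28, as one target
    area Tg: a detection area DA increments the shared count when its
    total overlap with the regions is at least Th3 pixels."""
    count = 0
    for da in detection_areas:
        if sum(rect_overlap(r, da) for r in regions) >= th3:
            count += 1
    return count
```

A detection area straddling the boundary between A25 and A28 thus contributes its full overlap to the shared count rather than being split between two per-area counts.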

The congestion degree determination system 1 of the present embodiment can easily be adapted to the vehicle in which it is installed. For example, the position and angle of view of the camera 3 may differ depending on the vehicle type of the vehicle 100 and the operating company. The congestion degree determination system 1 of the present embodiment can easily adjust the determination accuracy of the density by selecting the optimum target area Tg from the plurality of areas of the images G1 and G2. The determination accuracy of the density can also be easily adjusted by the thresholds Th1 and Th2. Furthermore, by setting an exclusion area for the driver and an exclusion area for seated passengers, the determination accuracy of the density can be easily adjusted. Thus, the congestion degree determination system 1 of the present embodiment is a highly versatile system.

The number of areas in the images G1 and G2 is not limited to the illustrated number. The shapes of the areas in the images G1 and G2 are not limited to the illustrated shapes. The images G1 and G2 need not be divided into equal areas.

The CPU 71 may determine the degree of congestion based on either one of the images G1 and G2. For example, based on one of the images G1 and G2, the CPU 71 may determine any of Level I, Level II, and Level III.

At least one of the operation of the detection unit 71a, the operation of the extraction unit 71b, and the operation of the determination unit 71c may be executed in the on-board unit 2 or in the server 6 on the cloud. For example, the CPU 21 of the on-board unit 2 may include the detection unit 71a, the extraction unit 71b, and the determination unit 71c. In this case, the CPU 21 may transmit information on the congestion level Lv to the Internet NW.

The contents disclosed in the above embodiment may be implemented in appropriate combinations.

1: vehicle congestion degree determination system
2: on-board unit
3: camera
4: door sensor
5: wireless base station
6: server
7: workstation PC
21: CPU
22: memory
23: GPS receiver
24: communication unit
31: first camera
32: second camera
71: CPU
71a: detection unit
71b: extraction unit
71c: determination unit
72: memory
73: communication unit
74: external input interface
100: vehicle
101: vehicle interior
102: driver's seat
103: aisle
104: alighting door
105: alighting entrance
106: boarding door
107: boarding entrance
108: rear seats
109: front seats
A11~A19: areas
A21~A29: areas
C0: count number for target area
C15, C25, C28: count numbers for areas
CA: total detection number
DA0~DA5: detection areas
G1, G2: images
NW: Internet
P0~P5: persons
S10~S80: steps

FIG. 1 is a block diagram of a vehicle congestion degree determination system according to the embodiment.
FIG. 2 is a diagram showing an image captured by the first camera.
FIG. 3 is a diagram showing an image captured by the second camera.
FIG. 4 is a diagram explaining the determination of the degree of overlap.
FIG. 5 is a flowchart showing the operation of the embodiment.


Claims (6)

1. A vehicle congestion degree determination method, comprising:
an acquisition step of acquiring a captured image of the interior of a vehicle transporting passengers;
a generation step of detecting persons from the image and generating, for each detected person, a rectangular detection area surrounding that person; and
a determination step of determining the degree of congestion of the vehicle based on a target area of the image and the detection areas,
wherein the target area is an area forming part of the image and is an area in which passengers standing in an aisle of the vehicle are captured, and
in the determination step, the degree of congestion of the vehicle is determined based on the number of detection areas overlapping the target area.
2. The vehicle congestion degree determination method according to claim 1, wherein the target area is an area in which passengers standing near a boarding or alighting entrance of the vehicle are captured.
3. The vehicle congestion degree determination method according to claim 1 or 2, wherein the image has a first area in which seated passengers are captured, and a detection area having its center coordinates in the first area is excluded from the calculation of the number of detection areas overlapping the target area.
4. The vehicle congestion degree determination method according to any one of claims 1 to 3, wherein in the determination step, the degree of congestion of the vehicle is determined based on the number of detection areas overlapping the target area and the total number of persons detected from the image.
5. The vehicle congestion degree determination method according to any one of claims 1 to 4, wherein in the acquisition step, a first image capturing passengers standing near an alighting entrance in the aisle and a second image capturing passengers standing near a boarding entrance in the aisle are acquired, and in the determination step, the degree of congestion of the vehicle is determined based on the first image and the second image.
6. A vehicle congestion degree determination system, comprising:
a camera that captures the interior of a vehicle transporting passengers and outputs an image;
a detection unit that detects persons from the image and generates, for each detected person, a rectangular detection area surrounding that person; and
a determination unit that determines the degree of congestion of the vehicle based on a target area of the image and the detection areas,
wherein the target area is an area forming part of the image and is an area in which passengers standing in an aisle of the vehicle are captured, and
the determination unit determines the degree of congestion of the vehicle based on the number of detection areas overlapping the target area.
TW111111019A 2021-04-06 2022-03-24 Vehicle congestion degree determination method, and vehicle congestion degree determination system TWI781069B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021064498A JP7305698B2 (en) 2021-04-06 2021-04-06 Vehicle congestion determination method and vehicle congestion determination system
JP2021-064498 2021-04-06

Publications (2)

Publication Number Publication Date
TWI781069B TWI781069B (en) 2022-10-11
TW202241114A true TW202241114A (en) 2022-10-16

Family

ID=83545842

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111111019A TWI781069B (en) 2021-04-06 2022-03-24 Vehicle congestion degree determination method, and vehicle congestion degree determination system

Country Status (4)

Country Link
JP (1) JP7305698B2 (en)
MX (1) MX2023010408A (en)
TW (1) TWI781069B (en)
WO (1) WO2022215394A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010003110A (en) * 2008-06-20 2010-01-07 Panasonic Corp On-vehicle moving image data recording device
DE102009039162A1 (en) 2009-08-27 2011-03-17 Knorr-Bremse Gmbh Monitoring device and method for monitoring an entry or exit area from an access opening of a vehicle to a building part
JP6252349B2 (en) 2014-05-14 2017-12-27 富士通株式会社 Monitoring device, monitoring method and monitoring program
CN111079474A (en) * 2018-10-19 2020-04-28 上海商汤智能科技有限公司 Passenger state analysis method and device, vehicle, electronic device, and storage medium
US20200273345A1 (en) 2019-02-26 2020-08-27 Aptiv Technologies Limited Transportation system and method
JP2021003972A (en) 2019-06-26 2021-01-14 株式会社東芝 Information processor, station management system, station management equipment and program
CN111079696B (en) 2019-12-30 2023-03-24 深圳市昊岳电子有限公司 Detection method based on vehicle monitoring personnel crowding degree

Also Published As

Publication number Publication date
WO2022215394A1 (en) 2022-10-13
MX2023010408A (en) 2023-09-18
JP2022160020A (en) 2022-10-19
TWI781069B (en) 2022-10-11
JP7305698B2 (en) 2023-07-10

Similar Documents

Publication Publication Date Title
JP6474687B2 (en) Elevator with image recognition function
US9569902B2 (en) Passenger counter
CN105390021B (en) The detection method and device of parking space state
JP5030983B2 (en) Train stop detection system and train moving speed and position detection system
CN104724566B (en) Elevator having image recognition function
WO2013088620A1 (en) Electronic device
JP4845580B2 (en) Train congestion notification system
CN106541968B (en) The recognition methods of the subway carriage real-time prompt system of view-based access control model analysis
JP5832749B2 (en) In-train monitoring system and in-train monitoring method
CN111460938A (en) Vehicle driving behavior real-time monitoring method and device
JP2013025523A (en) Monitoring system and congestion rate calculation method
JP2015000807A (en) Elevator control system and elevator control method
CN105404856B (en) A kind of public transit vehicle seat occupancy states detection method
CN108629230A (en) A kind of demographic method and device and elevator scheduling method and system
KR101737738B1 (en) System for analysing and transmission of subway image information
JP4056813B2 (en) Obstacle detection device
CN110992678A (en) Bus passenger flow statistical method based on big data face recognition
TWI781069B (en) Vehicle congestion degree determination method, and vehicle congestion degree determination system
JP7186749B2 (en) Management system, management method, management device, program and communication terminal
KR20150067018A (en) Method and service server for providing crowd density information
JP4689818B2 (en) Ride status determination device
JP7347787B2 (en) Crowd situation management device, congestion situation management system, congestion situation management method and program
RU121628U1 (en) INTELLIGENT PASSENGER FLOW ANALYSIS SYSTEM USING TECHNICAL VISION
TWI616849B (en) Intelligent bus system and its implementing method
TWI813198B (en) The system for counting the number of people getting on and off the bus, the method for counting the number of people getting on and off the bus, and the program for counting the number of people getting on and off the bus

Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent