WO2018100675A1 - Congestion state predicting system and method, and program - Google Patents

Congestion state predicting system and method, and program Download PDF

Info

Publication number
WO2018100675A1
WO2018100675A1 (PCT/JP2016/085562)
Authority
WO
WIPO (PCT)
Prior art keywords
image
congestion
prediction
congestion situation
image analysis
Prior art date
Application number
PCT/JP2016/085562
Other languages
French (fr)
Japanese (ja)
Inventor
Shunji Sugaya (菅谷 俊二)
Original Assignee
OPTiM Corporation (株式会社オプティム)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OPTiM Corporation (株式会社オプティム)
Priority to PCT/JP2016/085562 priority Critical patent/WO2018100675A1/en
Publication of WO2018100675A1 publication Critical patent/WO2018100675A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled

Definitions

  • The present invention relates to a congestion situation prediction system, method, and program.
  • For example, a traffic situation monitoring device has been proposed in which a traffic-situation determination means receives traffic-flow image information and estimates the amount of movement from the area ratio of the vehicle-movement region given by the difference between the current traffic-flow image of the target area and the image a short time later; a neural network outputs a recall degree for each linguistic value representing the stage of congestion (such as "traffic jam" and "slight traffic jam"); and the determination means selects the linguistic value with the maximum recall degree (see Patent Document 1).
  • An object of the present invention is to provide a congestion situation prediction system, method, and program capable of improving the accuracy of predicting the congestion situation a predetermined time ahead.
  • The present invention provides the following solutions.
  • The invention according to the first feature provides a congestion situation prediction system comprising: first image analysis means for analyzing a first image captured by a camera; learning means for learning, from the result of the image analysis of the first image, the congestion situation shown in the first image; second image acquisition means for acquiring a second image captured by the camera; second image analysis means for analyzing the acquired second image and acquiring its congestion situation; and prediction means for predicting, from the learned congestion situation of the first image and the analyzed congestion situation of the second image, the congestion situation a predetermined time after the second image.
  • According to the first feature, the congestion situation prediction system includes first image analysis means, learning means, second image acquisition means, second image analysis means, acquisition means, and prediction means.
  • The first image analysis means performs image analysis on the first image captured by the camera.
  • The learning means learns the congestion situation shown in the first image from the result of its image analysis.
  • The second image acquisition means acquires a second image captured by the camera.
  • The second image analysis means performs image analysis on the acquired second image and acquires its congestion situation.
  • The acquisition means obtains, from the result of the image analysis of the second image, the congestion situation shown in that image.
  • The prediction means predicts the congestion situation a predetermined time after the second image from the learned congestion situation of the first image and the analyzed congestion situation of the second image.
  • This allows the congestion situation a predetermined time ahead to be predicted not only from the congestion situation of the first image learned from its image analysis results, but also from the congestion situation obtained by analyzing the second image.
  • For example, last year and this year may have different congestion situations at the same date and time because of weather or other factors. In such a case, a prediction based only on the congestion situation learned from last year's first image may lack accuracy. Combining that learned congestion situation with the congestion situation obtained by analyzing a recently acquired second image therefore improves prediction accuracy.
  • The invention according to the first feature belongs to the system category, but a method and a program exhibit the same operations and effects.
  • The invention according to the second feature adds, to the invention of the first feature, that: the first image analysis means analyzes the number of cars or persons in the first image; the second image analysis means analyzes the congestion situation of cars or persons in the second image; and the prediction means predicts the congestion situation of cars or persons a predetermined time after the image analysis by the second image analysis means.
  • The invention according to the third feature adds, to the invention of the second feature, that the prediction means performs the prediction by referring to the number of cars or persons one year before the date and time to be predicted.
  • According to the third feature, it is possible, for example, to improve the accuracy of predicting the congestion situation of cars or persons a predetermined time ahead on a specific day (for example, January 1).
  • According to the present invention, a congestion situation prediction system, method, and program capable of improving the accuracy of predicting the congestion situation a predetermined time ahead can be provided.
  • FIG. 1 is a diagram for explaining an outline of a congestion situation prediction system 1 which is a preferred embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the relationship between the functional blocks of the congestion state prediction server 10 and the functions in the congestion state prediction system 1.
  • FIG. 3 is a flowchart of the congestion situation prediction process executed by the congestion situation prediction server 10.
  • FIG. 4 is a diagram for explaining the congestion situation prediction data 200 stored in the storage unit 12.
  • FIG. 5 is a diagram for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10.
  • FIG. 6 is a diagram for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10.
  • FIG. 7 is a diagram for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10.
  • FIG. 1 is a diagram for explaining an outline of a congestion situation prediction system 1, which is a preferred embodiment of the present invention. The outline will be described with reference to FIG. 1.
  • The congestion situation prediction system 1 predicts the congestion situation of cars or persons several hours ahead based on learning data obtained by analyzing images (first images) captured by a camera in the past (for example, one year ago), and on analysis data from images (second images) currently captured by the camera.
  • The congestion situation prediction system 1 analyzes the number of cars or persons at each time from a plurality of first images captured at very short intervals by a camera at a predetermined location, for example one year ago.
  • The "predetermined location" is, for example, a tourist spot that is likely to become crowded, or any place prone to traffic congestion, such as a junction on an expressway.
  • The congestion situation prediction system 1 learns the congestion situation shown in the first images from the results of their image analysis, and creates congestion situation prediction data for predicting the transition of the number of cars or persons at a predetermined time interval (15-minute intervals in the example shown in FIG. 1).
  • The congestion situation prediction system 1 also analyzes the number of cars or persons from second images currently captured at very short intervals by the camera at the predetermined location, and creates congestion situation analysis data indicating the current congestion situation at that location.
  • From the congestion situation prediction learning data created by learning from the first images and the congestion situation analysis data created by analyzing the second images, the congestion situation prediction system 1 predicts the congestion situation a predetermined time (one hour in the example shown in FIG. 1) after the time at which the second image was captured.
  • In this way, the congestion situation prediction system 1 can predict the congestion situation a predetermined time ahead based not only on the congestion situation of the first images learned from their image analysis results (congestion situation prediction learning data), but also on the congestion situation of the analyzed second images (congestion situation analysis data). For example, last year and this year may have different congestion situations at the same date and time because of weather or other factors. In such a case, predicting this year's congestion situation only from the congestion situation learned from last year's first images may lack accuracy.
  • Adding the analysis of the current second images to the congestion situation prediction learning data from last year's first images therefore improves the accuracy of predicting the congestion situation a predetermined time ahead.
  • FIG. 2 is a diagram illustrating the relationship between the functional blocks of the congestion state prediction server 10 and the functions in the congestion state prediction system 1.
  • The congestion situation prediction system 1 includes a congestion situation prediction server 10, which creates the congestion situation prediction data, and a terminal 20, which receives the congestion situation prediction data from the congestion situation prediction server 10 via a network.
  • The congestion situation prediction server 10 includes, as a control unit 11, a CPU (Central Processing Unit), RAM (Random Access Memory), ROM (Read Only Memory), and the like; a storage unit 12 that stores data using a hard disk or semiconductor memory; and a communication unit 13, for example a WiFi (Wireless Fidelity) device compliant with IEEE 802.11 (a wired connection may also be used).
  • The storage unit 12 stores the control program 100, the congestion situation prediction data 200, other data necessary for congestion situation prediction (for example, weather data), and data necessary for controlling the congestion situation prediction server 10.
  • The congestion situation prediction server 10 includes a CCD camera or the like as an imaging unit 14.
  • Alternatively, the congestion situation prediction server 10 may omit the imaging unit 14 and instead receive image data captured by an external imaging device such as a camera.
  • In the congestion situation prediction server 10, the control unit 11 reads the control program 100 and thereby realizes: the first image acquisition module 101 and the second image acquisition module 104, in cooperation with the storage unit 12, the communication unit 13, and the imaging unit 14; the first image analysis module 102, the learning module 103, and the second image analysis module 105, in cooperation with the storage unit 12; and the prediction module 106, in cooperation with the storage unit 12 and the communication unit 13.
  • The terminal 20 may be any general information device capable of requesting the congestion situation prediction data from the congestion situation prediction server 10, receiving it, and displaying the predicted congestion situation based on it: for example, a smartphone, a notebook PC, a car navigation system, or a portable terminal such as a netbook, slate terminal, electronic book terminal, electronic dictionary terminal, portable music player, or portable content playback/recording player.
  • FIG. 3 is a flowchart of the congestion situation prediction process executed by the congestion situation prediction server 10. The process performed by the modules of the congestion situation prediction server 10 described above will now be explained.
  • In step S1, the first image acquisition module 101 acquires first images captured at very short intervals by the imaging unit 14 and stores them in the storage unit 12. Specifically, the first image acquisition module 101 stores each first image in the storage unit 12 in association with the position where it was captured (for example, latitude/longitude) and data necessary for predicting the congestion situation, such as the date, time, day of the week, and weather.
  • In step S2, the first image analysis module 102 performs image analysis on the first images stored in the storage unit 12 in step S1. Specifically, it detects the number of cars or persons photographed in each first image. It then calculates the moving speed of a car or person from its displacement between first images captured at very short intervals and the time interval between those images. Such moving speeds are calculated successively over a predetermined time (for example, 15 minutes), and their average, the moving average speed, is computed.
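The speed calculation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names and the metres-per-pixel scale factor are assumptions, and a real system would first have to track the same car across frames.

```python
# Hypothetical sketch of the moving-average-speed step: given the pixel
# displacement of one tracked car between frames captured at a very short
# interval, compute instantaneous speeds and average them over the window
# (e.g. 15 minutes of samples).

def instantaneous_speed(displacement_px: float, interval_s: float,
                        metres_per_px: float = 0.05) -> float:
    """Speed in m/s from pixel displacement between two consecutive frames."""
    return displacement_px * metres_per_px / interval_s

def moving_average(speeds: list[float]) -> float:
    """Average of the instantaneous speeds collected over the window."""
    return sum(speeds) / len(speeds) if speeds else 0.0

# Example: a car shifts 40, 42, then 38 pixels between frames 1 s apart.
speeds = [instantaneous_speed(d, 1.0) for d in (40, 42, 38)]
avg = moving_average(speeds)  # mean of 2.0, 2.1, 1.9 m/s
```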
  • In step S3, the learning module 103 learns the congestion situation shown in the first images from the results of the image analysis performed by the first image analysis module 102 in step S2, creates congestion situation prediction data 200 for predicting changes in the number of cars or persons, and stores it in the storage unit 12.
  • Specifically, the learning module 103 learns with the date, time, day of the week, and weather as feature quantities, and derives the tendency of the number of cars or persons to increase or decrease, and the rate of that increase, in a given time zone on a given date.
  • The learning process may simply derive an approximate straight line or curve (for example by the least-squares method) from the dates and times at which the first images were captured and the corresponding numbers of cars or persons; that approximate line or curve then serves as the congestion situation prediction reference data.
  • Alternatively, this learning may be so-called machine learning: for example, the nearest-neighbour method, naive Bayes, decision trees, or support vector machines may be used, or deep learning in which feature quantities for learning are generated by a neural network.
  • In that case, the dates and times at which past first images were captured and the corresponding numbers of cars or persons may be learned as supervised data. In the nearest-neighbour or k-nearest-neighbour method, for example, the past examples (the number of cars or persons at each imaging date and time) are arranged in a feature space; when new data to be classified is given (the date and time for which the congestion situation is to be predicted), the closest past example (or the closest k examples) in the feature space determines the prediction result.
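The two learning options named above can be sketched on assumed toy data: (hour-of-day, car count) pairs taken from last year's first images. Both the least-squares line and the nearest-neighbour lookup are the generic techniques the text mentions, not the patent's exact procedure.

```python
# Assumed training data: imaging hour (feature) and cars detected (label).
hours  = [9.0, 10.0, 11.0, 12.0]
counts = [4.0, 6.0, 8.0, 10.0]

# Option 1: approximate straight line by the least-squares method.
n = len(hours)
mean_h = sum(hours) / n
mean_c = sum(counts) / n
slope = sum((h - mean_h) * (c - mean_c) for h, c in zip(hours, counts)) \
        / sum((h - mean_h) ** 2 for h in hours)
intercept = mean_c - slope * mean_h
predicted_at_13 = slope * 13.0 + intercept  # extrapolated count at 13:00

# Option 2: nearest neighbour - reuse the count of the closest past time.
query = 11.4  # date/time for which the congestion situation is predicted
nearest = min(zip(hours, counts), key=lambda hc: abs(hc[0] - query))[1]
```

On this data the fitted line is count = 2 x hour - 14, so the 13:00 extrapolation is 12 cars, and the nearest past example to 11.4 is 11:00 with 8 cars.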
  • FIG. 4 is a diagram for explaining the congestion situation prediction data 200 stored in the storage unit 12.
  • In the congestion situation prediction data 200, each data ID is associated with the date and time at which the reference first image was captured, the weather at that date and time, the number of recognized objects (the result of the image analysis by the first image analysis module 102; the number of cars in the example shown in FIG. 4), and the moving average speed, together with the increase/decrease tendency and rate of increase derived by the learning module 103.
  • Such congestion situation prediction data 200 is generated for each position where first images are captured and stored in the storage unit 12.
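One record of the congestion situation prediction data 200 described above could be modelled as follows. The field names are assumptions chosen to mirror the columns listed in the text (data ID, date/time, weather, number of recognized objects, moving average speed, tendency, rate of increase); the patent does not prescribe a schema.

```python
from dataclasses import dataclass

@dataclass
class CongestionRecord:
    data_id: int
    captured_at: str         # date and time the reference first image was taken
    weather: str
    object_count: int        # cars or persons recognized by image analysis
    moving_avg_speed: float  # averaged over the window, e.g. km/h
    tendency: str            # "increasing" or "decreasing"
    increase_rate: float     # e.g. objects per 15-minute step

# Hypothetical example row, one year before the prediction target.
rec = CongestionRecord(1, "2015-08-14 21:00", "clear", 5, 32.5,
                       "increasing", 0.2)
```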
  • In step S4, the second image acquisition module 104 acquires second images captured at very short intervals by the imaging unit 14 and stores them in the storage unit 12. Specifically, the second image acquisition module 104 stores each second image in the storage unit 12 in association with the position where it was captured (for example, latitude/longitude) and data necessary for predicting the congestion situation, such as the date, time, day of the week, and weather.
  • In step S5, the second image analysis module 105 performs image analysis on the second images stored in the storage unit 12 in step S4. Specifically, it detects the number of cars or persons photographed in each second image, calculates the moving speed of a car or person from its displacement between second images captured at very short intervals and the time interval between those images, calculates such moving speeds successively over a predetermined time (for example, 15 minutes), and computes the moving average speed as their average.
  • In step S6, the prediction module 106 predicts, from the congestion situation of the first images learned by the learning module 103 in step S3 and the congestion situation of the second images analyzed by the second image analysis module 105 in step S5, the congestion situation a predetermined time (for example, 15 minutes) after the time at which the second image was captured. For example, the prediction module 106 predicts the congestion situation by referring to the number of cars or persons one year before the date and time being predicted.
  • The prediction module 106 then transmits congestion situation prediction data indicating the predicted congestion situation at the predetermined location to the terminal 20.
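One way the combination in step S6 could work is sketched below. The linear extrapolation is an assumption for illustration: the patent only states that the learned congestion situation of the first images and the analyzed congestion situation of the second image are both used.

```python
# Hedged sketch of step S6: combine last year's learned trend (rate of
# increase per 15-minute step) with the count just observed in the
# current second image to forecast the count a predetermined time ahead.

def predict_count(current_count: int, increase_rate_per_step: float,
                  steps_ahead: int) -> float:
    """Predicted number of cars/persons `steps_ahead` 15-min steps later."""
    return current_count + increase_rate_per_step * steps_ahead

# 4 cars observed now; last year's trend adds 0.5 cars per 15 minutes;
# predict one hour (four 15-minute steps) ahead.
forecast = predict_count(4, 0.5, steps_ahead=4)  # 6.0
```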
  • FIGS. 5 to 7 are diagrams for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10.
  • The first image analysis module 102 performs image analysis on the first image shown in FIG. 5 and on the first image shown in FIG. 6, which was captured about 15 minutes later. It extracts the contours of objects (cars in the examples shown in FIGS. 5 and 6) based on luminance differences within the ranges enclosed by the broken lines, detects the number of objects photographed in each first image, and stores it in the congestion situation prediction data 200 of the storage unit 12.
  • Taking a first image stored in the congestion situation prediction data 200 (and given a data ID) as a reference, the first image analysis module 102 calculates the moving speed of each extracted contour from the first images captured at very short intervals before and after the imaging time of that reference image.
  • The first image analysis module 102 calculates such moving speeds successively over a predetermined time (for example, 15 minutes), computes the moving average speed as their average, and stores it in the congestion situation prediction data 200 of the storage unit 12.
  • The learning module 103 learns from the date, time, day of the week, weather, number of recognized objects, and moving average speed stored in the congestion situation prediction data 200, derives the increase/decrease tendency and rate of increase for each data ID, and stores them in the congestion situation prediction data 200 of the storage unit 12.
  • The second image analysis module 105 performs image analysis on the second image shown in FIG. 7 by the same processing as the first image analysis module 102: it extracts the contours of objects (cars in the example shown in FIG. 7), detects the number of objects photographed in the second image (four in the example shown in FIG. 7), and calculates the moving speed and moving average speed.
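The counting step above can be illustrated with a toy, dependency-free stand-in: objects are taken to be connected regions whose luminance differs from the road background by more than a threshold. The text only says contours are extracted from luminance differences; this flood-fill labelling is an assumed substitute for a real contour extractor such as OpenCV's `findContours`.

```python
def count_objects(luma, background=0, threshold=50):
    """Count connected bright regions in a 2-D luminance grid."""
    h, w = len(luma), len(luma[0])
    seen = [[False] * w for _ in range(h)]

    def fill(r, c):
        # Iterative 4-neighbour flood fill marking one object's pixels.
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if 0 <= y < h and 0 <= x < w and not seen[y][x] \
               and abs(luma[y][x] - background) > threshold:
                seen[y][x] = True
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]

    objects = 0
    for r in range(h):
        for c in range(w):
            if abs(luma[r][c] - background) > threshold and not seen[r][c]:
                objects += 1
                fill(r, c)
    return objects

# Two bright blobs on a dark road count as two "cars".
frame = [[0, 200, 0, 0],
         [0, 200, 0, 180],
         [0, 0, 0, 180]]
```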
  • From the number of objects and moving speed detected by the second image analysis module 105 in the second image captured at a predetermined date and time (2016/8/14 21:00:15 in the example shown in FIG. 7), and from the congestion situation prediction data 200 created from the first images captured approximately one year before that date and time (the first images shown in FIGS. 5 and 6), the prediction module 106 predicts the number of objects and moving speed (the congestion situation) a predetermined time after the date and time at which the second image was captured.
  • The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program.
  • The program is provided, for example, in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM or the like), or a DVD (DVD-ROM, DVD-RAM, or the like).
  • In this case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it.
  • Alternatively, the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, optical disk, or magneto-optical disk, and provided from that storage device to the computer.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

[Problem] To provide a congestion state predicting system that can improve the accuracy of a prediction of the state of congestion at a given later time. [Solution] A congestion state predicting system 1 is equipped with: a first image analysis module 102 for performing image analysis of a first image captured by a camera; a learning module 103 for learning the state of congestion shown in the first image from the image analysis results for the first image; a second image acquisition module 104 for acquiring a second image captured by a camera; a second image analysis module 105 for performing image analysis of the acquired second image to acquire the state of congestion therein; and a prediction module 106 for predicting the state of congestion at a given time period after the second image from the learned state of congestion in the first image and the analyzed state of congestion in the second image.

Description

Congestion situation prediction system, method, and program

The present invention relates to a congestion situation prediction system, method, and program.

Conventionally, the congestion situation of cars or persons has been predicted by applying machine learning to similar past data and using the output of the learned model.

For example, a traffic situation monitoring device has been proposed in which a traffic-situation determination means receives traffic-flow image information and estimates the amount of movement from the area ratio of the vehicle-movement region given by the difference between the current traffic-flow image of the target area and the image a short time later; a neural network outputs a recall degree for each linguistic value representing the stage of congestion (such as "traffic jam" and "slight traffic jam"); and the determination means selects the linguistic value with the maximum recall degree (see Patent Document 1).

Patent Document 1: JP-A-6-4795

However, with the technique of Patent Document 1, if a factor not contained in the traffic-flow image information used to estimate the amount of movement arises after the predetermined time has elapsed, the determined linguistic value can differ from reality and the congestion situation cannot be predicted accurately.

In view of these problems, an object of the present invention is to provide a congestion situation prediction system, method, and program capable of improving the accuracy of predicting the congestion situation a predetermined time ahead.

The present invention provides the following solutions.
The invention according to the first feature provides a congestion situation prediction system comprising:
first image analysis means for analyzing a first image captured by a camera;
learning means for learning, from the result of the image analysis of the first image, the congestion situation shown in the first image;
second image acquisition means for acquiring a second image captured by the camera;
second image analysis means for analyzing the acquired second image and acquiring its congestion situation; and
prediction means for predicting, from the learned congestion situation of the first image and the analyzed congestion situation of the second image, the congestion situation a predetermined time after the second image.

According to the first feature, the congestion situation prediction system includes first image analysis means, learning means, second image acquisition means, second image analysis means, acquisition means, and prediction means. The first image analysis means performs image analysis on the first image captured by the camera. The learning means learns the congestion situation shown in the first image from the result of its image analysis. The second image acquisition means acquires a second image captured by the camera. The second image analysis means analyzes the acquired second image and acquires its congestion situation. The acquisition means obtains the congestion situation shown in the second image from the result of its image analysis. The prediction means predicts the congestion situation a predetermined time after the second image from the learned congestion situation of the first image and the analyzed congestion situation of the second image.

This allows the congestion situation a predetermined time ahead to be predicted not only from the learned congestion situation of the first image but also from the congestion situation obtained by analyzing the second image. For example, last year and this year may have different congestion situations at the same date and time because of weather or other factors, and a prediction based only on last year's learned congestion situation may then lack accuracy. Combining it with the congestion situation obtained from a recently acquired second image improves prediction accuracy.

The invention according to the first feature belongs to the system category, but a method and a program exhibit the same operations and effects. A congestion situation prediction system, method, and program capable of improving the accuracy of predicting the congestion situation a predetermined time ahead can therefore be provided.
 第2の特徴に係る発明は、第1の特徴に係る発明に加え、
 前記第1画像解析手段は、前記第1画像として、車又は人物の数を画像解析し、
 前記第2画像解析手段は、前記第2画像として、車又は人物の混雑状況を画像解析し、
 前記予測手段は、前記第2画像解析手段による画像解析から所定時間後の車又は人物の混雑状況を予測する混雑状況予測システムを提供する。
The invention according to the second feature is in addition to the invention according to the first feature,
The first image analysis means analyzes the number of cars or persons as the first image,
The second image analysis means performs image analysis on a congestion situation of a car or a person as the second image,
The prediction means provides a congestion situation prediction system that predicts a congestion situation of a car or a person after a predetermined time from the image analysis by the second image analysis means.
 第2の特徴に係る発明によれば、所定時間後における、車又は人物の混雑状況の予測の正確性を向上することが可能となる。 According to the invention relating to the second feature, it is possible to improve the accuracy of prediction of the congestion situation of a car or a person after a predetermined time.
 第3の特徴に係る発明は、第2の特徴に係る発明に加え、
 前記予測手段は、予測する日時の1年前の車又は人物の数を参照して予測する混雑状況予測システムを提供する。
The invention according to the third feature provides, in addition to the invention according to the second feature, a congestion situation prediction system in which
the prediction means performs the prediction by referring to the number of cars or persons one year before the date and time to be predicted.
 第3の特徴に係る発明によれば、例えば、特定の日(例えば、1月1日等)の所定時間後における、車又は人物の混雑状況の予測の正確性を向上することが可能となる。 According to the invention relating to the third feature, it is possible, for example, to improve the accuracy of prediction of the congestion situation of cars or persons a predetermined time after a specific day (for example, January 1).
 本発明によれば、所定時間後における混雑状況の予測の正確性を向上することが可能な混雑状況予測システム、方法及びプログラムを提供できる。 According to the present invention, it is possible to provide a congestion situation prediction system, method and program capable of improving the accuracy of prediction of the congestion situation after a predetermined time.
図1は、本発明の好適な実施形態である混雑状況予測システム1の概要を説明するための図である。FIG. 1 is a diagram for explaining an outline of a congestion situation prediction system 1 which is a preferred embodiment of the present invention. 図2は、混雑状況予測システム1における混雑状況予測サーバ10の機能ブロックと各機能の関係を示す図である。FIG. 2 is a diagram illustrating the relationship between the functional blocks of the congestion state prediction server 10 and the functions in the congestion state prediction system 1. 図3は、混雑状況予測サーバ10が実行する混雑状況予測処理のフローチャートである。FIG. 3 is a flowchart of the congestion situation prediction process executed by the congestion situation prediction server 10. 図4は、記憶部12に記憶された混雑状況予測データ200を説明する図である。FIG. 4 is a diagram for explaining the congestion situation prediction data 200 stored in the storage unit 12. 図5は、混雑状況予測サーバ10が実行する混雑状況予測処理の一例を説明する図である。FIG. 5 is a diagram for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10. 図6は、混雑状況予測サーバ10が実行する混雑状況予測処理の一例を説明する図である。FIG. 6 is a diagram for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10. 図7は、混雑状況予測サーバ10が実行する混雑状況予測処理の一例を説明する図である。FIG. 7 is a diagram for explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10.
 以下、本発明を実施するための最良の形態について図を参照しながら説明する。なお、これはあくまでも一例であって、本発明の技術的範囲はこれに限られるものではない。 Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings. This is merely an example, and the technical scope of the present invention is not limited to this.
 [混雑状況予測システムの概要]
 図1は、本発明の好適な実施形態である混雑状況予測システム1の概要を説明するための図である。この図1に基づいて、混雑状況予測システム1の概要を説明する。
[Overview of the congestion situation prediction system]
FIG. 1 is a diagram for explaining an overview of the congestion situation prediction system 1, which is a preferred embodiment of the present invention. An overview of the congestion situation prediction system 1 will be described with reference to FIG. 1.
 混雑状況予測システム1は、過去(例えば、1年前)にカメラで撮像された画像(第1画像)を画像解析し、学習した学習データと、現在にカメラで撮像された画像(第2画像)を画像解析したデータと、に基づき、例えば、現在から数時間後の車又は人物の混雑状況を予測するシステムである。 The congestion situation prediction system 1 analyzes images (first images) captured by a camera in the past (for example, one year ago) and learns from them, and, based on the resulting learning data and the data obtained by analyzing images (second images) currently captured by the camera, predicts, for example, the congestion situation of cars or persons several hours from now.
 混雑状況予測システム1は、所定の場所において、例えば1年前にカメラで、微少時間間隔で撮像された複数の第1画像から、それぞれの時間における車又は人物の数を画像解析する。なお、「所定の場所」は、例えば、人混みが発生し易い観光地や、高速道路のジャンクション等の渋滞が発生し易い、任意の場所である。 The congestion status prediction system 1 performs image analysis on the number of cars or persons at each time from a plurality of first images captured at a minute time interval with a camera at a predetermined location, for example, one year ago. Note that the “predetermined place” is, for example, a tourist spot that is likely to be crowded, or an arbitrary place that is likely to cause traffic congestion such as a junction on an expressway.
 また、混雑状況予測システム1は、第1画像の画像解析の結果から、当該第1画像に示される混雑状況を学習し、所定の時間間隔(図1に示す例では15分間隔)の車又は人物の数の推移を予測するための混雑状況予測データを作成する。 The congestion situation prediction system 1 also learns the congestion situation shown in the first images from the result of their image analysis, and creates congestion situation prediction data for predicting the transition of the number of cars or persons at a predetermined time interval (15-minute intervals in the example shown in FIG. 1).
 また、混雑状況予測システム1は、所定の場所において、現在、カメラにより微少時間間隔で撮像された第2画像から、車又は人物の数を画像解析し、所定の場所における現在の混雑状況を示す混雑状況解析データを作成する。 The congestion situation prediction system 1 also analyzes the number of cars or persons in second images currently captured by the camera at minute time intervals at the predetermined place, and creates congestion situation analysis data indicating the current congestion situation at that place.
 そして、混雑状況予測システム1は、第1画像から学習し作成した混雑状況予測学習データと、第2画像を解析し作成した混雑状況解析データと、から第2画像が撮像された時間から、所定時間(図1に示す例では1時間)後における混雑状況を予測する。 The congestion situation prediction system 1 then predicts, from the congestion situation prediction learning data created by learning from the first images and the congestion situation analysis data created by analyzing the second images, the congestion situation a predetermined time (1 hour in the example shown in FIG. 1) after the time when the second images were captured.
 このような混雑状況予測システム1によれば、第1画像の画像解析の結果から学習された第1画像の混雑状況(混雑状況予測学習データ)からだけでなく、画像解析された第2画像の混雑状況(混雑状況解析データ)により、所定時間後における混雑状況を予測できる。例えば、去年と今年では、同じ日時でも、天候等により混雑状況が異なる場合がある。このような場合、去年の第1画像の画像解析の結果から学習された第1画像の混雑状況だけで、今年の混雑状況を予測した場合、この予測が正確性を欠く場合がある。そこで、例えば、去年の第1画像の画像解析の結果から学習された第1画像の混雑状況(混雑状況予測学習データ)に加え、近々に取得した第2画像の画像解析による混雑状況(混雑状況解析データ)を含め予測することで、予測の正確性が向上する。したがって、所定時間後における混雑状況の予測の正確性を向上することが可能となる。 According to such a congestion situation prediction system 1, the congestion situation after a predetermined time can be predicted not only from the congestion situation of the first image learned from its image analysis (the congestion situation prediction learning data) but also from the congestion situation of the analyzed second image (the congestion situation analysis data). For example, even at the same date and time, the congestion situation may differ between last year and this year depending on the weather or the like. In such a case, a prediction of this year's congestion based only on the congestion situation learned from last year's first image may lack accuracy. Therefore, predicting from the congestion situation learned from last year's first image (the congestion situation prediction learning data) together with the congestion situation obtained by analyzing a recently acquired second image (the congestion situation analysis data) improves the accuracy of the prediction. It is therefore possible to improve the accuracy of predicting the congestion situation after a predetermined time.
 [混雑状況予測システムの各機能の説明]
 図2は、混雑状況予測システム1における混雑状況予測サーバ10の機能ブロックと各機能の関係を示す図である。混雑状況予測システム1は、混雑状況を予測した混雑状況予測データを作成する混雑状況予測サーバ10と、混雑状況予測サーバ10にネットワークを介して接続され、混雑状況予測サーバ10から混雑状況予測データを受信する端末20と、を含む。
[Description of each function of the congestion status prediction system]
FIG. 2 is a diagram illustrating the functional blocks of the congestion situation prediction server 10 in the congestion situation prediction system 1 and the relationships among the functions. The congestion situation prediction system 1 includes the congestion situation prediction server 10, which creates congestion situation prediction data predicting the congestion situation, and a terminal 20, which is connected to the congestion situation prediction server 10 via a network and receives the congestion situation prediction data from it.
 混雑状況予測サーバ10は、制御部11として、CPU(Central Processing Unit),RAM(Random Access Memory),ROM(Read Only Memory)等を備え、記憶部12として、ハードディスクや半導体メモリによる、データのストレージ部を備え、通信部13として、例えば、IEEE802.11に準拠したWiFi(Wireless Fidelity)対応デバイス(有線であってもよい)等を備える。記憶部12は、制御プログラム100、混雑状況予測データ200、その他、混雑状況予測に必要なデータ(例えば、気象データ等)や、混雑状況予測サーバ10の制御に必要なデータを記憶する。また、混雑状況予測サーバ10は、撮像部14として、CCDカメラ等を備える。なお、混雑状況予測サーバ10は、撮像部14を備えず、外部に設けられたカメラ等の撮像機器で撮像された画像データを受信してもよい。 The congestion situation prediction server 10 includes, as a control unit 11, a CPU (Central Processing Unit), RAM (Random Access Memory), ROM (Read Only Memory) and the like; as a storage unit 12, a data storage section such as a hard disk or semiconductor memory; and as a communication unit 13, for example, a WiFi (Wireless Fidelity) device conforming to IEEE 802.11 (which may instead be wired). The storage unit 12 stores the control program 100, the congestion situation prediction data 200, other data necessary for congestion prediction (for example, weather data), and data necessary for controlling the congestion situation prediction server 10. The congestion situation prediction server 10 also includes, as an imaging unit 14, a CCD camera or the like. Alternatively, the congestion situation prediction server 10 may omit the imaging unit 14 and receive image data captured by an external imaging device such as a camera.
 混雑状況予測サーバ10において、制御部11が制御プログラム100を読み込むことで、記憶部12、通信部13及び撮像部14と協働して、第1画像取得モジュール101及び第2画像取得モジュール104を実現する。また、混雑状況予測サーバ10において、制御部11が制御プログラム100を読み込むことで、記憶部12と協働して、第1画像解析モジュール102、学習モジュール103及び第2画像解析モジュール105を実現する。また、混雑状況予測サーバ10において、制御部11が制御プログラム100を読み込むことで、記憶部12及び通信部13と協働して、予測モジュール106を実現する。 In the congestion situation prediction server 10, the control unit 11 reads the control program 100 and thereby realizes the first image acquisition module 101 and the second image acquisition module 104 in cooperation with the storage unit 12, the communication unit 13 and the imaging unit 14; the first image analysis module 102, the learning module 103 and the second image analysis module 105 in cooperation with the storage unit 12; and the prediction module 106 in cooperation with the storage unit 12 and the communication unit 13.
 また、本実施形態において、端末20は、一般的な情報端末であってよく、混雑状況予測サーバ10に混雑状況予測データを要求し、混雑状況予測サーバ10から混雑状況予測データを受信し、混雑状況予測データに基づく予測された混雑状況を表示等が可能な情報機器であり、例えば、スマートフォン、ノート型PC、カーナビゲーションシステム、ネットブック端末、スレート端末、電子書籍端末、電子辞書端末、携帯型音楽プレーヤ、携帯型コンテンツ再生・録画プレーヤといった携帯型端末であってもよい。 In the present embodiment, the terminal 20 may be a general information terminal: an information device capable of requesting congestion situation prediction data from the congestion situation prediction server 10, receiving it, and displaying the predicted congestion situation based on it. It may be, for example, a portable terminal such as a smartphone, notebook PC, car navigation system, netbook terminal, slate terminal, electronic book terminal, electronic dictionary terminal, portable music player, or portable content playback/recording player.
 [混雑状況予測処理]
 図3は、混雑状況予測サーバ10が実行する混雑状況予測処理のフローチャートである。上述した混雑状況予測サーバ10の各種モジュールが行う混雑状況予測処理について説明する。
[Congestion status prediction processing]
FIG. 3 is a flowchart of the congestion situation prediction process executed by the congestion situation prediction server 10. The congestion status prediction process performed by the various modules of the congestion status prediction server 10 described above will be described.
 ステップS1において、第1画像取得モジュール101は、撮像部14により微少間隔で撮像された第1画像を取得し、記憶部12に記憶する。詳細には、第1画像取得モジュール101は、第1画像に、第1画像が撮像された位置(例えば、緯度・経度)、日付、時刻、曜日、天気等の混雑状況の予測に必要なデータを対応付けて、記憶部12に記憶する。 In step S1, the first image acquisition module 101 acquires first images captured at minute intervals by the imaging unit 14 and stores them in the storage unit 12. Specifically, the first image acquisition module 101 associates each first image with the data necessary for predicting the congestion situation, such as the position where it was captured (for example, latitude and longitude), date, time, day of the week and weather, and stores them in the storage unit 12.
 ステップS2において、第1画像解析モジュール102は、ステップS1で第1画像取得モジュール101が記憶部12に記憶した第1画像を画像解析する。詳細には、第1画像解析モジュール102は、第1画像を画像解析し、第1画像に撮影されている車又は人物の数を検出する。また、第1画像解析モジュール102は、微少間隔で撮像された複数の第1画像間における検出した車又は人物の変位と、複数の第1画像間の時間間隔から、車又は人物の移動速度を算出する。更に、このような移動速度を、所定時間(例えば、15分)、順次算出し、これらの平均した移動平均速度を算出する。 In step S2, the first image analysis module 102 analyzes the first images stored in the storage unit 12 by the first image acquisition module 101 in step S1. Specifically, the first image analysis module 102 analyzes a first image and detects the number of cars or persons captured in it. The first image analysis module 102 also calculates the moving speed of a car or person from its detected displacement between multiple first images captured at minute intervals and the time interval between those images. Further, it calculates such moving speeds successively over a predetermined time (for example, 15 minutes) and computes their average as a moving average speed.
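The speed computation described in step S2 can be sketched as follows. This is an illustrative sketch, not part of the patent disclosure: the function name, planar coordinates in metres, and timestamps in seconds are all assumptions made for the example.

```python
from statistics import mean

def moving_average_speed(positions, timestamps):
    """Estimate the moving average speed of one tracked object from its
    positions in successive frames captured at minute (short) intervals.

    positions:  list of (x, y) coordinates (metres) of the object per frame
    timestamps: list of capture times (seconds) for the same frames
    """
    speeds = []
    for (x0, y0), (x1, y1), t0, t1 in zip(
            positions, positions[1:], timestamps, timestamps[1:]):
        # displacement between two consecutive frames divided by the
        # time interval between them gives an instantaneous speed
        distance = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(distance / (t1 - t0))  # metres per second
    # average of the successive speeds over the observation window
    return mean(speeds)

# An object moving 5 m along x every second yields a constant 5 m/s
print(moving_average_speed([(0, 0), (5, 0), (10, 0)], [0.0, 1.0, 2.0]))
```

In the system described above, the same computation would be repeated for each detected car or person over the 15-minute window before averaging.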
 ステップS3において、学習モジュール103は、ステップS2で第1画像解析モジュール102による第1画像の画像解析の結果から、当該第1画像に示される混雑状況を学習し、車又は人物の数の推移を予測するための混雑状況予測データ200を作成し、記憶部12に記憶する。 In step S3, the learning module 103 learns the congestion situation shown in the first images from the result of their analysis by the first image analysis module 102 in step S2, creates the congestion situation prediction data 200 for predicting the transition of the number of cars or persons, and stores it in the storage unit 12.
 具体的には、学習モジュール103は、日付、時刻、曜日、天気を特徴量として、学習し、ある日付のある時間帯における車又は人物の増減の傾向と増加率を導出する。例えば、学習モジュール103による学習処理は、単純に、第1画像を撮像した日時と車又は人物の数とから近似直線や近似曲線(最小二乗法)を導出する処理であってもよい。この場合、近似直線や近似曲線が混雑状況予測基準データとなる。また、この学習処理は、いわゆる機械学習であってよい。機械学習の具体的なアルゴリズムとしては、最近傍法、ナイーブベイズ法、決定木、サポートベクターマシンを利用してよい。また、ニューラルネットワークを利用して、学習するための特徴量を自ら生成する深層学習(ディープラーニング)であってもよい。過去の第1画像を撮像した日時と車又は人物の数を教師ありデータとして学習してもよい。また、例えば、最近傍法やk近傍法であれば、過去の実例として、第1画像を撮像した日時における車又は人物の数を特徴空間に配置しておき、新しく判定したいデータが与えられた際に、特徴空間上で最も距離が近い過去の実例(1個又はk個)のクラス(混雑状況を予測したい日時)を予測結果とする。 Specifically, the learning module 103 learns using the date, time, day of the week and weather as feature quantities, and derives the increase/decrease tendency and increase rate of the number of cars or persons in a given time zone on a given date. For example, the learning process by the learning module 103 may simply derive an approximation line or curve (by the least squares method) from the dates and times when the first images were captured and the numbers of cars or persons; in that case, the approximation line or curve serves as the congestion situation prediction reference data. The learning process may also be so-called machine learning. As concrete machine learning algorithms, the nearest neighbour method, naive Bayes, decision trees or support vector machines may be used, or deep learning, which uses a neural network to generate its own feature quantities for learning. The dates and times when past first images were captured and the numbers of cars or persons may be learned as supervised data. For example, with the nearest neighbour or k-nearest neighbour method, past instances (the number of cars or persons at the date and time each first image was captured) are placed in a feature space, and when new data to be judged is given, the class (the date and time whose congestion is to be predicted) of the closest past instance (or instances, 1 or k) in the feature space is taken as the prediction result.
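The nearest neighbour lookup mentioned above can be sketched as follows. This is a minimal 1-nearest-neighbour sketch under assumed conventions, not part of the patent disclosure: the feature encoding (hour, weekday, weather code), the squared Euclidean distance, and all names are assumptions made for the example.

```python
def nearest_neighbor_predict(history, query):
    """Predict a congestion count by 1-nearest-neighbour lookup over
    past records.

    history: list of (features, count) pairs, where features is a numeric
             tuple such as (hour, weekday, weather_code)
    query:   feature tuple for the date/time to predict
    """
    def dist(a, b):
        # squared Euclidean distance in the feature space
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # the past instance closest to the query supplies the prediction
    best_features, best_count = min(history, key=lambda rec: dist(rec[0], query))
    return best_count

# Past records: 9:00 Monday clear -> 12 cars, 17:00 Friday rain -> 48, 21:00 Saturday clear -> 30
history = [((9, 0, 0), 12), ((17, 4, 1), 48), ((21, 5, 0), 30)]
print(nearest_neighbor_predict(history, (18, 4, 1)))  # closest record is 17:00 Friday rain
```

A k-nearest-neighbour variant would average the counts of the k closest records instead of taking a single one.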
 図4は、記憶部12に記憶された混雑状況予測データ200を説明する図である。混雑状況予測データ200は、データIDに、基準となる第1画像が撮像された日付及び時刻と、当該日時における天気と、第1画像解析モジュール102により画像解析された結果である認識対象数(図4に示す例では車台数)及び移動平均速度と、学習モジュール103により導出された増減傾向及び増加率と、が対応付けられている。このような混雑状況予測データ200は、第1画像が撮像された位置毎に生成され、記憶部12に記憶されている。 FIG. 4 is a diagram explaining the congestion situation prediction data 200 stored in the storage unit 12. In the congestion situation prediction data 200, each data ID is associated with the date and time when the reference first image was captured, the weather at that date and time, the number of recognized objects (the number of cars in the example shown in FIG. 4) and the moving average speed resulting from image analysis by the first image analysis module 102, and the increase/decrease tendency and increase rate derived by the learning module 103. Such congestion situation prediction data 200 is generated for each position where first images are captured and stored in the storage unit 12.
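A record of the congestion situation prediction data 200 described above can be sketched as a simple data structure. This is an illustrative sketch only: the field names, types and example values are assumptions, not taken from FIG. 4 of the patent.

```python
from dataclasses import dataclass

@dataclass
class CongestionRecord:
    """One row of the congestion situation prediction data, keyed by data ID."""
    data_id: int
    captured_at: str          # date and time the reference first image was taken
    weather: str              # weather at that date and time
    object_count: int         # number of recognized objects (e.g. cars)
    moving_avg_speed: float   # moving average speed over the 15-minute window
    trend: str                # increase/decrease tendency derived by learning
    growth_rate: float        # increase rate derived by learning (e.g. 1.2 = +20%)

rec = CongestionRecord(
    data_id=1,
    captured_at="2015-08-14 21:00:00",
    weather="clear",
    object_count=3,
    moving_avg_speed=42.5,
    trend="increase",
    growth_rate=1.2,
)
print(rec.object_count, rec.growth_rate)
```

One such record would exist per data ID and per camera position, mirroring the per-location tables described in the text.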
 ステップS4において、第2画像取得モジュール104は、撮像部14により微少間隔で撮像された第2画像を取得し、記憶部12に記憶する。詳細には、第2画像取得モジュール104は、第2画像に、第2画像が撮像された位置(例えば、緯度・経度)、日付、時刻、曜日、天気等の混雑状況の予測に必要なデータを対応付けて、記憶部12に記憶する。 In step S4, the second image acquisition module 104 acquires second images captured at minute intervals by the imaging unit 14 and stores them in the storage unit 12. Specifically, the second image acquisition module 104 associates each second image with the data necessary for predicting the congestion situation, such as the position where it was captured (for example, latitude and longitude), date, time, day of the week and weather, and stores them in the storage unit 12.
 ステップS5において、第2画像解析モジュール105は、ステップS4で第2画像取得モジュール104が記憶部12に記憶した第2画像を画像解析する。詳細には、第2画像解析モジュール105は、第2画像を画像解析し、第2画像に撮影されている車又は人物の数を検出する。また、第2画像解析モジュール105は、微少間隔で撮像された複数の第2画像間における検出した車又は人物の変位と、複数の第2画像間の時間間隔から、車又は人物の移動速度を算出する。更に、このような移動速度を、所定時間(例えば、15分)、順次算出し、これらを平均した移動平均速度を算出する。 In step S5, the second image analysis module 105 analyzes the second images stored in the storage unit 12 by the second image acquisition module 104 in step S4. Specifically, the second image analysis module 105 analyzes a second image and detects the number of cars or persons captured in it. The second image analysis module 105 also calculates the moving speed of a car or person from its detected displacement between multiple second images captured at minute intervals and the time interval between those images. Further, it calculates such moving speeds successively over a predetermined time (for example, 15 minutes) and computes their average as a moving average speed.
 ステップS6において、予測モジュール106は、ステップS3で学習モジュール103により学習された第1画像の混雑状況と、ステップS5で第2画像解析モジュール105により画像解析された第2画像の混雑状況と、から当該第2画像が撮像された時間から所定時間後(例えば15分後)における混雑状況を予測する。例えば、予測モジュール106は、予測する日時の1年前の車又は人物の数を参照して混雑状況を予測する。また、予測モジュール106は、所定の場所における混雑状況の予測を示す混雑状況予測データを、端末20に送信する。 In step S6, the prediction module 106 predicts, from the congestion situation of the first image learned by the learning module 103 in step S3 and the congestion situation of the second image analyzed by the second image analysis module 105 in step S5, the congestion situation a predetermined time (for example, 15 minutes) after the time when the second image was captured. For example, the prediction module 106 predicts the congestion situation by referring to the number of cars or persons one year before the date and time to be predicted. The prediction module 106 also transmits congestion situation prediction data, indicating the predicted congestion situation at the predetermined place, to the terminal 20.
 図5から図7は、混雑状況予測サーバ10が実行する混雑状況予測処理の一例を説明する図である。具体的には、第1画像解析モジュール102は、図5に示す第1画像及び図5に示す第1画像の約15分後に撮像された図6に示す第1画像の画像解析において、例えば、図5及び図6に示す破線で囲む範囲の輝度差により、対象物(図5及び図6に示す例では車)の輪郭を抽出し、第1画像に撮影されている対象物の数を検出し(図5に示す例では1台、図6に示す例では3台)、記憶部12の混雑状況予測データ200に記憶する。 FIGS. 5 to 7 are diagrams explaining an example of the congestion situation prediction process executed by the congestion situation prediction server 10. Specifically, in analyzing the first image shown in FIG. 5 and the first image shown in FIG. 6, captured about 15 minutes after that of FIG. 5, the first image analysis module 102 extracts the contours of the objects (cars in the examples shown in FIGS. 5 and 6) from, for example, the luminance differences within the regions enclosed by broken lines in FIGS. 5 and 6, detects the number of objects captured in each first image (one in the example of FIG. 5, three in the example of FIG. 6), and stores it in the congestion situation prediction data 200 of the storage unit 12.
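The idea of detecting objects from luminance differences can be sketched crudely as counting connected bright regions in a grayscale image. This pure-Python sketch is a stand-in for the contour extraction described above, not the patent's actual method: the grid representation, the threshold value and the 4-connectivity flood fill are all assumptions made for the example.

```python
def count_bright_objects(image, threshold):
    """Count connected regions whose luminance is at or above `threshold`.

    image: grayscale image as a list of rows of integer luminance values
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                count += 1  # found a new bright region
                stack = [(r, c)]
                while stack:  # flood-fill the whole region (4-connectivity)
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] >= threshold and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# Two separate bright blobs against a dark background
img = [
    [0, 200, 0,   0],
    [0, 200, 0, 180],
    [0,   0, 0, 180],
]
print(count_bright_objects(img, 128))
```

A production system would instead use a real contour extractor (for example, an image-processing library) on the broken-line regions of FIGS. 5 and 6, but the counting principle is the same.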
 また、第1画像解析モジュール102は、混雑状況予測データ200に記憶するデータとなる(データIDを付与する)第1画像を基準に、この基準となる第1画像の撮像時間の前後において微少間隔で撮像された第1画像から、抽出した輪郭の移動速度を算出する。そして、第1画像解析モジュール102は、このような移動速度を、所定時間(例えば、15分)、順次算出し、これらを平均した移動平均速度を算出し、記憶部12の混雑状況予測データ200に記憶する。 Taking as a reference the first image that becomes a record of the congestion situation prediction data 200 (to which a data ID is assigned), the first image analysis module 102 also calculates the moving speed of each extracted contour from the first images captured at minute intervals before and after the capture time of that reference image. The first image analysis module 102 then calculates such moving speeds successively over a predetermined time (for example, 15 minutes), computes their average as a moving average speed, and stores it in the congestion situation prediction data 200 of the storage unit 12.
 そして、学習モジュール103は、混雑状況予測データ200に記憶された日付、時刻、曜日、天気、認識対象数及び移動平均速度から学習し、データID毎に増減傾向及び増加率を導出し、記憶部12の混雑状況予測データ200に記憶する。 The learning module 103 then learns from the date, time, day of the week, weather, number of recognized objects and moving average speed stored in the congestion situation prediction data 200, derives the increase/decrease tendency and increase rate for each data ID, and stores them in the congestion situation prediction data 200 of the storage unit 12.
 次に、第2画像解析モジュール105は、第1画像解析モジュール102と同様の処理により、図7に示す第2画像を画像解析し、対象物(図7に示す例では車)の輪郭を抽出し、第2画像に撮影されている対象物の数を検出し(図7に示す例では4台)、移動速度や移動平均速度を算出する。 Next, the second image analysis module 105 performs image analysis on the second image shown in FIG. 7 by the same processing as the first image analysis module 102, and extracts the contour of the object (the car in the example shown in FIG. 7). Then, the number of objects photographed in the second image is detected (four in the example shown in FIG. 7), and the moving speed and moving average speed are calculated.
 そして、予測モジュール106は、所定の日時(図7に示す例では2016/8/14 21:00:15)に撮像された第2画像に基づき、第2画像解析モジュール105が検出した対象物の数と移動速度に、混雑状況予測データ200に記憶された、当該第2画像が撮像された所定の日時の略1年前の第1画像(図5及び図6に示す第1画像)に基づき作成されたデータIDの増減傾向及び増加率を乗算し、例えば、第2画像が撮像された所定の日時から、所定時間後の対象物の数と移動速度(混雑状況)を予測する。 Then, based on the second image captured at a predetermined date and time (2016/8/14 21:00:15 in the example shown in FIG. 7), the prediction module 106 multiplies the number of objects and the moving speed detected by the second image analysis module 105 by the increase/decrease tendency and increase rate of the data ID created from the first images captured roughly one year before that date and time (the first images shown in FIGS. 5 and 6) and stored in the congestion situation prediction data 200, thereby predicting, for example, the number of objects and the moving speed (congestion situation) a predetermined time after the date and time when the second image was captured.
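The final multiplication step can be sketched as follows. This is an illustrative sketch of the described computation, not a definitive implementation: the function name and the convention of applying the same learned rate to both the count and the speed (as the text above describes) are stated assumptions, and the rounding of the count is an added assumption.

```python
def predict_congestion(current_count, current_speed, growth_rate):
    """Project the congestion a predetermined time ahead by scaling the
    currently observed object count and moving average speed with the
    increase rate learned from last year's first images.

    current_count: number of objects detected in the second image
    current_speed: moving average speed from the second images (e.g. km/h)
    growth_rate:   learned increase rate for the matching data ID (1.5 = +50%)
    """
    predicted_count = round(current_count * growth_rate)
    predicted_speed = current_speed * growth_rate
    return predicted_count, predicted_speed

# Four cars at 40 km/h now, with a learned +50% rate for this time slot
count, speed = predict_congestion(4, 40.0, 1.5)
print(count, speed)
```

A refinement consistent with the text would select `growth_rate` from the record whose date, time, day of the week and weather best match the second image, one year earlier.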
 上述した手段、機能は、コンピュータ(CPU,情報処理装置,各種端末を含む)が、所定のプログラムを読み込んで、実行することによって実現される。プログラムは、例えば、フレキシブルディスク、CD(CD-ROM等)、DVD(DVD-ROM、DVD-RAM等)等のコンピュータ読取可能な記録媒体に記録された形態で提供される。この場合、コンピュータはその記録媒体からプログラムを読み取って内部記憶装置又は外部記憶装置に転送し記憶して実行する。また、そのプログラムを、例えば、磁気ディスク、光ディスク、光磁気ディスク等の記憶装置(記録媒体)に予め記録しておき、その記憶装置からコンピュータに提供するようにしてもよい。 The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program. The program is provided in a form recorded on a computer-readable recording medium such as a flexible disk, CD (CD-ROM, etc.), DVD (DVD-ROM, DVD-RAM, etc.), for example. In this case, the computer reads the program from the recording medium, transfers it to the internal storage device or the external storage device, stores it, and executes it. The program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer.
 以上、本発明の実施形態について説明したが、本発明は上述したこれらの実施形態に限るものではない。また、本発明の実施形態に記載された効果は、本発明から生じる最も好適な効果を列挙したに過ぎず、本発明による効果は、本発明の実施形態に記載されたものに限定されるものではない。 Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. The effects described in the embodiments of the present invention merely enumerate the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
 1 混雑状況予測システム、101 第1画像取得モジュール、102 第1画像解析モジュール、103 学習モジュール、104 第2画像取得モジュール、105 第2画像解析モジュール、106 予測モジュール、100 制御プログラム、200 混雑状況予測データ、20 端末
 
DESCRIPTION OF SYMBOLS 1 Congestion situation prediction system, 101 1st image acquisition module, 102 1st image analysis module, 103 Learning module, 104 2nd image acquisition module, 105 2nd image analysis module, 106 Prediction module, 100 Control program, 200 Congestion situation prediction Data, 20 terminals

Claims (5)

  1.  カメラで撮像された第1画像を画像解析する第1画像解析手段と、
     前記第1画像の画像解析の結果から、当該第1画像に示される混雑状況を学習する学習手段と、
     前記カメラで撮像された第2画像を取得する第2画像取得手段と、
     前記取得された第2画像を画像解析し、混雑状況を取得する第2画像解析手段と、
     前記学習された第1画像の混雑状況と、前記画像解析された第2画像の混雑状況と、から当該第2画像の所定時間後における混雑状況を予測する予測手段と、を備えることを特徴とする混雑状況予測システム。
    A congestion situation prediction system characterized by comprising:
    first image analysis means for analyzing a first image captured by a camera;
    learning means for learning, from the result of the image analysis of the first image, the congestion situation shown in the first image;
    second image acquisition means for acquiring a second image captured by the camera;
    second image analysis means for analyzing the acquired second image and acquiring a congestion situation; and
    prediction means for predicting, from the learned congestion situation of the first image and the congestion situation of the analyzed second image, the congestion situation a predetermined time after the second image.
  2.  前記第1画像解析手段は、前記第1画像として、車又は人物の数を画像解析し、
     前記第2画像解析手段は、前記第2画像として、車又は人物の混雑状況を画像解析し、
     前記予測手段は、前記第2画像解析手段による画像解析から所定時間後の車又は人物の混雑状況を予測する請求項1に記載の混雑状況予測システム。
    The first image analysis means analyzes the number of cars or persons as the first image,
    The second image analysis means performs image analysis on a congestion situation of a car or a person as the second image,
    The congestion state prediction system according to claim 1, wherein the prediction unit predicts a congestion state of a car or a person after a predetermined time from the image analysis by the second image analysis unit.
  3.  前記予測手段は、予測する日時の1年前の車又は人物の数を参照して予測する請求項2に記載の混雑状況予測システム。 The congestion status prediction system according to claim 2, wherein the prediction means predicts with reference to the number of cars or persons one year before the date and time to be predicted.
  4.  カメラで撮像された画像から混雑状況を予測する混雑状況予測システムが実行する方法であって、
     カメラで撮像された第1画像を画像解析するステップと、
     前記第1画像の画像解析の結果から、当該第1画像に示される混雑状況を学習するステップと、
     前記カメラで撮像された第2画像を取得するステップと、
     前記取得された第2画像を画像解析し、混雑状況を取得するステップと、
     前記学習された第1画像の混雑状況と、前記画像解析された第2画像の混雑状況と、から当該第2画像の所定時間後における混雑状況を予測するステップと、を備えることを特徴とする混雑状況予測方法。
    A method executed by a congestion situation prediction system that predicts a congestion situation from images captured by a camera, comprising the steps of:
    analyzing a first image captured by the camera;
    learning, from the result of the image analysis of the first image, the congestion situation shown in the first image;
    acquiring a second image captured by the camera;
    analyzing the acquired second image and acquiring a congestion situation; and
    predicting, from the learned congestion situation of the first image and the congestion situation of the analyzed second image, the congestion situation a predetermined time after the second image.
  5.  カメラで撮像された画像から混雑状況を予測する混雑状況予測システムを制御するコンピュータを、
     カメラで撮像された第1画像を画像解析する第1画像解析手段、
     前記第1画像の画像解析の結果から、当該第1画像に示される混雑状況を学習する学習手段、
     前記カメラで撮像された第2画像を取得する第2画像取得手段、
     前記取得された第2画像を画像解析し、混雑状況を取得する第2画像解析手段、
     前記学習された第1画像の混雑状況と、前記画像解析された第2画像の混雑状況と、から当該第2画像の所定時間後における混雑状況を予測する予測手段、
     として機能させるプログラム。
    A program causing a computer that controls a congestion situation prediction system predicting a congestion situation from images captured by a camera to function as:
    first image analysis means for analyzing a first image captured by the camera;
    learning means for learning, from the result of the image analysis of the first image, the congestion situation shown in the first image;
    second image acquisition means for acquiring a second image captured by the camera;
    second image analysis means for analyzing the acquired second image and acquiring a congestion situation; and
    prediction means for predicting, from the learned congestion situation of the first image and the congestion situation of the analyzed second image, the congestion situation a predetermined time after the second image.
PCT/JP2016/085562 2016-11-30 2016-11-30 Congestion state predicting system and method, and program WO2018100675A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/085562 WO2018100675A1 (en) 2016-11-30 2016-11-30 Congestion state predicting system and method, and program


Publications (1)

Publication Number Publication Date
WO2018100675A1 true WO2018100675A1 (en) 2018-06-07

Family

ID=62242583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/085562 WO2018100675A1 (en) 2016-11-30 2016-11-30 Congestion state predicting system and method, and program

Country Status (1)

Country Link
WO (1) WO2018100675A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004192425A (en) * 2002-12-12 2004-07-08 Matsushita Electric Ind Co Ltd Congestion degree forecasting system
JP2015076078A (en) * 2013-10-11 2015-04-20 パイオニア株式会社 Congestion prediction system, terminal device, congestion prediction method, and congestion prediction program


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021018727A (en) * 2019-07-23 2021-02-15 株式会社ナビタイムジャパン Information processing system, information processing program and information processing method
JP7410540B2 (en) 2019-07-23 2024-01-10 株式会社ナビタイムジャパン Information processing system, information processing program, and information processing method

Similar Documents

Publication Publication Date Title
CN111626208B (en) Method and device for detecting small objects
US10540554B2 (en) Real-time detection of traffic situation
CN103184719B (en) Road surface survey device
JP2020520520A (en) Using telematics data to identify trip types
US20160210860A1 (en) Method for processing measurement data of a vehicle in order to determine the start of a search for a parking space
JP2019526840A (en) Map updating method and system based on control feedback of autonomous vehicle
JP2021514885A (en) Feature extraction method based on deep learning used for LIDAR positioning of autonomous vehicles
CN111833600B (en) Method and device for predicting transit time and data processing equipment
CN107402397B (en) User activity state determination method and device based on mobile terminal and mobile terminal
CN103366221A (en) Information processing apparatus and information processing method
JP6700373B2 (en) Apparatus and method for learning object image packaging for artificial intelligence of video animation
EP3706095A1 (en) Evaluation device, evaluation system, vehicle, and program
JP2019114094A (en) Road traffic control system and autonomous vehicle control method
US11816543B2 (en) System and method for using knowledge gathered by a vehicle
WO2018100675A1 (en) Congestion state predicting system and method, and program
JP2007058751A (en) Apparatus, method, and program for discriminating object
US20220383736A1 (en) Method for estimating coverage of the area of traffic scenarios
CN107368553B (en) Method and device for providing search suggestion words based on activity state
JP2019053578A (en) Traffic volume determination system, traffic volume determination method, and traffic volume determination program
JP6687648B2 (en) Estimating device, estimating method, and estimating program
Das et al. Why slammed the brakes on? auto-annotating driving behaviors from adaptive causal modeling
CN105976453A (en) Image transformation-based driving alarm method and apparatus thereof
JP2016181061A (en) Driving environment evaluation system
JP6606779B6 (en) Information providing apparatus, information providing method, and program
JP6686076B2 (en) Information processing apparatus, information processing method, program, and application program

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16922722; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 16922722; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)