WO2019003355A1 - System for providing image analysis result, method for providing image analysis result, and program - Google Patents


Info

Publication number
WO2019003355A1
WO2019003355A1 (application PCT/JP2017/023807)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image analysis
learned model
unknown
learned
Prior art date
Application number
PCT/JP2017/023807
Other languages
French (fr)
Japanese (ja)
Inventor
菅谷 俊二 (Shunji Sugaya)
Original Assignee
株式会社オプティム (OPTiM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社オプティム (OPTiM Corporation)
Priority to PCT/JP2017/023807 (WO2019003355A1)
Priority to JP2018545247A (JP6474946B1)
Publication of WO2019003355A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The present invention selects, from among a plurality of machine-learned models created by having artificial intelligence analyze known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image that is newly to be analyzed.
  • The present invention relates to an image analysis result providing system, an image analysis result providing method, and a program that can output accurate image analysis results without spending time on learning.
  • Patent Document 1 proposes a mechanism that automatically classifies images of people by performing image analysis to determine who appears in each image.
  • Supervised learning is a well-known machine learning method by which artificial intelligence performs image analysis.
  • For example, suppose entry detection is performed by image analysis to determine whether a person has entered a level crossing. Because the position of a monitoring camera is usually fixed, supervised learning can be performed separately for each level crossing, such as crossing A, crossing B, and crossing C.
  • Machine learning improves the accuracy of entry detection at each of crossing A, crossing B, and crossing C, and in subsequent image analysis the learned model of crossing A, the learned model of crossing B, and the learned model of crossing C are each used for their own crossing.
  • However, to introduce entry detection at a new level crossing D, a large number of tagged images must be prepared as supervised data for crossing D, and machine learning must be performed from scratch on that basis, so introducing entry detection takes considerable time and effort.
  • The inventor therefore focused on the fact that, by selecting and using the learned model of the crossing whose imaging conditions are most similar to those of crossing D from among crossings A, B, and C, which have already undergone machine learning, image analysis accuracy for entry detection above a certain level can be obtained at crossing D from the start.
  • Accordingly, it is an object of the present invention to provide an image analysis result providing system, an image analysis result providing method, and a program that select, from among a plurality of machine-learned models created by having artificial intelligence analyze known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image newly to be analyzed, and that can thereby output accurate image analysis results without spending time on learning.
  • the present invention provides the following solutions.
  • The invention according to the first feature provides an image analysis result providing system comprising: storage means for storing machine-learned models obtained by image analysis of known images; acquisition means for acquiring an unknown image for which no learned model has yet been created; selection means for selecting, from the stored learned models, the learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; image analysis means for analyzing the unknown image using the selected learned model; and providing means for providing the result of the image analysis.
  • That is, the image analysis result providing system comprises storage means for storing machine-learned models obtained by image analysis of known images, acquisition means for acquiring an unknown image for which no learned model has yet been created, selection means for selecting from the stored learned models the learned model of a known image whose imaging conditions are similar to those of the acquired unknown image, image analysis means for analyzing the unknown image using the selected model, and providing means for providing the result of the image analysis.
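As a purely illustrative sketch (not the claimed implementation), the five claimed means could be arranged as follows in Python; the class name, the dictionary-based model representation, and the nearest-condition scoring are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the first feature: storage, acquisition,
# selection, image analysis, and providing means in one small class.
class ImageAnalysisResultProvider:
    def __init__(self):
        self.models = {}  # storage means: learned models of known images

    def store_model(self, name, model):
        self.models[name] = model

    def select_model(self, conditions):
        # Selection means: pick the stored model whose imaging
        # conditions (angle, distance) are closest to the unknown image.
        def score(m):
            return (abs(m["angle"] - conditions["angle"])
                    + abs(m["distance"] - conditions["distance"]))
        return min(self.models.values(), key=score)

    def provide(self, unknown_image, conditions):
        model = self.select_model(conditions)      # selection means
        return model["analyze"](unknown_image)     # analysis + providing


provider = ImageAnalysisResultProvider()
provider.store_model("learned model A",
    {"angle": 30, "distance": 5.5, "analyze": lambda img: "A:" + img})
provider.store_model("learned model B",
    {"angle": 20, "distance": 4.5, "analyze": lambda img: "B:" + img})
```

With these stored models, an unknown image captured at 20 degrees and 4.5 m would be analyzed with learned model B, mirroring the crossing D example later in the specification.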
  • Although the invention according to the first feature is described in the category of an image analysis result providing system, the same operation and effects are obtained as an image analysis result providing method and as a program.
  • The invention according to the second feature is the image analysis result providing system according to the first feature, wherein the imaging conditions are the imaging position and the imaging angle with respect to the imaging target.
  • That is, in the invention according to the second feature, the imaging conditions are the imaging position and the imaging angle with respect to the imaging target.
  • The invention according to the third feature is the image analysis result providing system according to the first or second feature, wherein a learned model consists of the mathematical expression and parameters used for the image analysis, calculated by the machine learning, together with the images used for the machine learning.
  • That is, in the invention according to the third feature, a learned model is the mathematical expression and parameters used for the image analysis, calculated by the machine learning, and the images used for the machine learning.
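The third feature's definition of a learned model (expression, parameters, training images) could be represented as a small data structure; the sketch below is an assumption made for illustration, including the linear example expression and the file names.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LearnedModel:
    # "Mathematical expression" used for the image analysis,
    # represented here as a callable taking an input and parameters.
    expression: Callable[[float, dict], float]
    # Parameters calculated by the machine learning.
    parameters: dict
    # Images (tagged teacher data) used for the machine learning.
    training_images: List[str] = field(default_factory=list)

    def predict(self, x: float) -> float:
        return self.expression(x, self.parameters)

# Example: a model whose expression is a simple linear score.
model_b = LearnedModel(
    expression=lambda x, p: p["b"] * x + p["beta"],
    parameters={"b": 2.0, "beta": 1.0},
    training_images=["B001.jpg", "B002.jpg"],
)
```

Bundling the training images with the expression and parameters, as the claim does, keeps the teacher data available for later retraining or similarity checks.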
  • The invention according to the fourth feature is the image analysis result providing system according to any of the first through third features, further comprising creation means for creating a new machine-learned model by performing image analysis on the unknown image.
  • That is, the image analysis result providing system according to any of the first through third features comprises creation means for creating a new learned model in which the unknown image has been analyzed by machine learning.
  • The invention according to the fifth feature is the image analysis result providing system according to the fourth feature, wherein the image analysis means analyzes the unknown image using the selected learned model during the period until the new machine-learned model is created.
  • That is, until the new machine-learned model is created, the image analysis means analyzes the unknown image using the selected learned model.
  • The invention according to the sixth feature is the image analysis result providing system according to any of the first through fifth features, further comprising creation means for creating a new machine-learned model of the unknown image, using as teacher data the analysis results obtained by applying the selected learned model to the unknown image.
  • That is, the selected learned model is applied to the unknown image, and its analysis results are used as teacher data for the new model.
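The sixth feature's bootstrapping idea, labeling unknown images with the borrowed model's own predictions and training on those labels, could be sketched as below; the function names and the toy "training" (a lookup table standing in for real machine learning) are assumptions for illustration.

```python
def bootstrap_teacher_data(selected_model, unknown_images):
    # Label each unknown image with the selected (borrowed) model's
    # prediction; the (image, label) pairs become teacher data.
    return [(img, selected_model(img)) for img in unknown_images]

def train_new_model(teacher_data):
    # Toy stand-in for machine learning on the accumulated teacher
    # data: memorize the labels and return a predictor.
    table = dict(teacher_data)
    return lambda img: table.get(img, "unknown")


# Borrowed model for a similar crossing, used until enough teacher
# data for the new crossing has accumulated.
borrowed = lambda img: "entered" if "person" in img else "clear"
data = bootstrap_teacher_data(borrowed, ["person_at_crossing.jpg",
                                         "empty_crossing.jpg"])
model_d = train_new_model(data)
```

In practice the quality of the new model depends on the borrowed model's accuracy on the unknown images, which is why the specification selects the model with the most similar imaging conditions.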
  • The invention according to the seventh feature provides an image analysis result providing method comprising: storing machine-learned models obtained by image analysis of known images; acquiring an unknown image for which no learned model has yet been created; selecting, from the stored learned models, the learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; analyzing the unknown image using the selected learned model; and providing the result of the image analysis.
  • The invention according to the eighth feature provides a program that causes an image analysis result providing system to execute the steps of: storing machine-learned models obtained by image analysis of known images; acquiring an unknown image for which no learned model has yet been created; selecting, from the stored learned models, the learned model of a known image whose imaging conditions are similar to those of the acquired unknown image; analyzing the unknown image using the selected learned model; and providing the result of the image analysis.
  • According to the present invention, by selecting and using, from among a plurality of machine-learned models created by having artificial intelligence analyze known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image newly to be analyzed, it becomes possible to provide an image analysis result providing system, an image analysis result providing method, and a program that can output accurate image analysis results without spending time on learning.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
  • FIG. 2 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions.
  • FIG. 3 is a flow chart in the case of acquiring an unknown image from the camera 100, performing image analysis processing by the computer 200, and providing an image analysis result.
  • FIG. 4 is a view showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions in the case of performing a learned model creation process of an unknown image.
  • FIG. 5 is a flowchart of the camera 100 and the computer 200 in the case of performing a learned model creation process of an unknown image.
  • FIG. 6 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions when the image analysis process is switched depending on whether the learned model creation process of the unknown image is finished.
  • FIG. 7 corresponds to branch A of the flowchart of FIG. 6 and is a flowchart of the learned model creation process performed by the computer 200 when the learned model creation process for the unknown image is not finished and machine learning of the unknown image is possible.
  • FIG. 8 corresponds to branch B of the flowchart of FIG. 6 and is a flowchart of the learned model selection process performed by the computer 200 when the learned model creation process for the unknown image is not finished and machine learning of the unknown image is not possible.
  • FIG. 9 is a flowchart of the computer 200 when it performs machine learning of an unknown image while adding the image analysis results of the unknown image as teacher data during the learned model creation process.
  • FIG. 10 shows, as the learned models of crossing A, crossing B, and crossing C, examples of the mathematical expressions and parameters calculated by machine learning and of the images used for the machine learning, together with an example image of crossing D, which is an unknown image.
  • FIG. 11 is a diagram for schematically describing the relationship between the camera 100, the computer 200, and the subject 400.
  • FIG. 12 is an example of a table showing a data structure of a learned model for each camera.
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
  • FIG. 14 is an example of a table showing a learned model used for image analysis for each camera when a learned model of an unknown image captured by a camera D is created.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. The outline of the present invention will be described based on FIG.
  • the image analysis result providing system includes a camera 100, a computer 200, and a communication network 300.
  • the number of cameras 100 is not limited to one, and may be plural.
  • The computer 200 is not limited to a physical device and may be a virtual device.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • The computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240, as shown in FIG. 2.
  • the control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211. Further, the control unit 210 cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the input / output unit 240 implements the provision module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging device that includes imaging hardware such as an image sensor and a lens, can perform data communication with the computer 200, and can measure the distance to the subject 400.
  • A web camera is illustrated as an example, but the camera may be any imaging device provided with the necessary functions, such as a digital camera, a digital video camera, a camera mounted on an unmanned aerial vehicle, a wearable device camera, a security camera, an on-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • A desktop computer is illustrated as an example, but the computer 200 may be, besides a mobile phone, a portable information terminal, a tablet terminal, or a personal computer, an electrical appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head mounted display.
  • the storage module 231 of the computer 200 stores a plurality of learned models in the storage unit 230 (step S01).
  • the learned model may be acquired from another computer or storage medium, or may be created by the computer 200.
  • the storage unit 230 may be provided with a dedicated database.
  • FIG. 10 is a diagram showing, as the learned models of crossing A, crossing B, and crossing C, examples of the mathematical expressions and parameters calculated by machine learning and examples of the images used for the machine learning. It also shows an example of the unknown image of crossing D, for which no learned model has yet been created.
  • FIG. 12 is an example of a table showing a data structure of a learned model for each camera.
  • Here, a learned model associates, for each camera, a mathematical expression for analyzing images of the subject with its parameters.
  • The image files with supervised data that were used in the machine learning that calculated the learned model may also be associated with it.
  • As imaging conditions for each camera, the imaging angle and the imaging position may also be associated and stored.
  • In FIG. 10, the learned model created using supervised data from camera A, which captured crossing A, is learned model A; the learned model created using supervised data from camera B, which captured crossing B, is learned model B; and the learned model created using supervised data from camera C, which captured crossing C, is learned model C. For the image of crossing D captured by camera D, no learned model has been created.
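The per-camera table of FIG. 12 could be represented as a simple mapping; this is a hypothetical sketch, and aside from the expression "yyyyyy" and parameters "BBB, b" mentioned later in the text, the concrete values (file names, camera A's entries) are placeholders introduced here.

```python
# Hypothetical FIG. 12 data structure: one entry per camera, holding
# the learned model's expression, parameters, teacher-data images,
# and imaging conditions (angle in degrees, distance range in meters).
learned_models = {
    "camera A": {"expression": "xxxxxx", "parameters": ("AAA", "a"),
                 "teacher_images": ["A001.jpg"],
                 "angle": 30, "distance": (5, 6)},
    "camera B": {"expression": "yyyyyy", "parameters": ("BBB", "b"),
                 "teacher_images": ["B001.jpg"],
                 "angle": 20, "distance": (4, 5)},
    "camera D": None,  # unknown image: no learned model created yet
}
```

Storing the imaging conditions alongside each model is what makes the later similarity-based selection possible without reopening the teacher data.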
  • FIG. 11 is a diagram for schematically describing the relationship between the camera 100, the computer 200, and the subject 400. It is assumed that the camera 100 and the computer 200 can communicate with each other via the communication network 300.
  • the camera 100 in the present invention is an imaging device capable of measuring the distance to a subject.
  • As methods of measuring the distance to the subject, in addition to acquiring it from a sensor or the like of the camera 100, when the subject can be imaged simultaneously from a plurality of different directions, the distance can be measured by learning the relationship between the shift between the images captured by the respective cameras and the actual distance. The imaging angle can also be calculated using the measured distance. Furthermore, when the location of the camera 100 is fixed, the distance to the imaging location may be specified explicitly.
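The two-view distance measurement described above resembles textbook stereo triangulation; the sketch below uses the standard pinhole relation Z = f·B/d rather than the learned mapping the specification mentions, and the focal length and baseline values are assumptions for illustration.

```python
import math

def stereo_distance(focal_length_px, baseline_m, disparity_px):
    """Distance to the subject from the shift (disparity) between two
    views taken a known baseline apart: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("subject must appear shifted between the views")
    return focal_length_px * baseline_m / disparity_px

def imaging_angle_deg(camera_height_m, ground_distance_m):
    # Tilt from the horizontal needed to view a point on the ground at
    # the measured distance, given the camera's mounting height.
    return math.degrees(math.atan2(camera_height_m, ground_distance_m))
```

A fixed monitoring camera would compute these once at installation time, which matches the specification's note that the distance may simply be specified explicitly when the camera location is fixed.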
  • As the imaging angle, the angle in degrees by which the camera 100 is tilted from the horizontal direction is used.
  • Here, the imaging angle of the camera 100 is 30 degrees, and the imaging position, that is, the imaging distance, is 5 to 6 m.
  • As imaging conditions, the imaging angle and the imaging position are taken as examples, but in the case of entry detection at a level crossing, other conditions, such as the presence or absence of an alarm, the presence or absence of a crossing gate, and whether the line is single or double track, may also be included to help determine the degree of similarity between the unknown image and the learned models.
  • the camera 100 transmits imaging data, which is an unknown image, to the computer 200 (step S02), and the acquisition module 211 of the computer 200 acquires the unknown image (step S03).
  • the acquisition module 211 acquires imaging conditions such as an imaging angle and an imaging position from the camera 100 together with the unknown image.
  • Here, a flow in which the camera 100 transmits imaging data that is an unknown image has been described, but the acquisition module 211 may instead instruct the camera 100 to transmit imaging data, and the camera 100 may transmit the imaging data upon receiving the instruction.
  • The acquisition module 211 may acquire not only images captured in real time by the camera 100 but also images captured by the camera 100 in the past and stored in the storage unit 130.
  • the selection module 212 of the computer 200 selects, from among the learned models stored in step S01, a learned model whose imaging condition is similar to the unknown image acquired in step S03 (step S04).
  • Here, since no learned model exists for crossing D, the learned model whose imaging conditions are most similar is chosen from among crossing A, crossing B, and crossing C, for which learned models exist. Assuming that the imaging angle of camera D, which captures crossing D, is 20 degrees and its imaging distance is 4 to 5 m, and after further analyzing the composition of the images of crossing D, learned model B is selected here.
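The selection in step S04 could be scored as a distance over the imaging conditions; this is a sketch under stated assumptions, since the specification does not fix a similarity measure: the unweighted sum of angle and distance differences, and the candidate values, are illustrative.

```python
def select_learned_model(candidates, target_angle, target_distance):
    # Score each candidate by how close its imaging conditions are to
    # the unknown image's conditions; a lower score means more similar.
    def score(cond):
        return (abs(cond["angle"] - target_angle)
                + abs(cond["distance"] - target_distance))
    return min(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "learned model A": {"angle": 30, "distance": 5.5},
    "learned model B": {"angle": 20, "distance": 4.5},
    "learned model C": {"angle": 45, "distance": 8.0},
}
```

With camera D at 20 degrees and about 4.5 m, learned model B scores lowest and is selected, matching the example in the text; a production system might also weight the terms or add composition features.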
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the learned model B (step S05).
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
  • Because learned model B is selected as the learned model whose imaging conditions are similar to those of crossing D imaged by camera D, learned model B is entered as the used model in the camera D column of FIG. 13.
  • The table is filled in assuming that yyyyyy is used as the mathematical expression and BBB, b, and ⁇ are used as the parameters.
  • The teacher data column may be left blank.
  • After a learned model for camera D is created, the selection module 212 selects the learned model most suitable for camera D again, as shown in FIG. 14, and image analysis for camera D can be performed using that table.
  • Finally, the provision module 241 of the computer 200 provides the image analysis result through the input/output unit 240 of the computer 200 (step S06).
  • The image analysis result may be output not only as an image display but also, for example, together with a warning sound or light; the output is made in accordance with the purpose of the system in which it is provided.
  • The image analysis is assumed to be applicable, according to the purpose of the system, to appropriate tasks such as face recognition for identifying individuals, determining the pest damage status of agricultural products, inventory confirmation in a warehouse, and image recognition of affected areas for medical diagnosis.
  • As described above, according to the present invention, by selecting and using, from among a plurality of machine-learned models created by having artificial intelligence analyze known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image newly to be analyzed, it becomes possible to provide an image analysis result providing system, an image analysis result providing method, and a program that can output accurate image analysis results without spending time on learning.
  • FIG. 2 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • the computer 200 also includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240.
  • the control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211. Further, the control unit 210 cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the input / output unit 240 implements the provision module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging device that includes imaging hardware such as an image sensor and a lens, can perform data communication with the computer 200, and can measure the distance to the subject 400.
  • A web camera is illustrated as an example, but the camera may be any imaging device provided with the necessary functions, such as a digital camera, a digital video camera, a camera mounted on an unmanned aerial vehicle, a wearable device camera, a security camera, an on-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • The camera 100 includes, as the imaging unit 10, imaging hardware such as a lens, an image sensor, various buttons, and a flash, and captures images as moving images or still images. The captured image is a precise image having the amount of information necessary for image analysis. The resolution, camera angle, camera magnification, and the like at the time of imaging may also be specified.
  • the control unit 110 includes a central processing unit (CPU), a random access memory (RAM), a read only memory (ROM), and the like.
  • The camera 100 includes, as the communication unit 120, a device for enabling communication with other devices, for example a wireless device compliant with IEEE 802.11 such as WiFi (Wireless Fidelity), or a wireless device compliant with the IMT-2000 standard such as a third or fourth generation mobile communication system. A wired LAN connection may also be used.
  • The storage unit 130 includes a data storage unit implemented with a hard disk or semiconductor memory, and stores captured images and necessary data such as imaging conditions.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • A desktop computer is illustrated as an example, but the computer 200 may be, besides a mobile phone, a portable information terminal, a tablet terminal, or a personal computer, an electrical appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head mounted display.
  • the control unit 210 includes a CPU, a RAM, a ROM, and the like.
  • the control unit 210 cooperates with the communication unit 220 and the storage unit 230 to realize the acquisition module 211. Further, the control unit 210 cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
  • The computer 200 includes, as the communication unit 220, a device for enabling communication with other devices, for example a wireless device compliant with IEEE 802.11 or with the IMT-2000 standard such as a third or fourth generation mobile communication system. A wired LAN connection may also be used.
  • The storage unit 230 includes a data storage unit implemented with a hard disk or semiconductor memory, and stores data necessary for processing, such as captured images, teacher data, and image analysis results.
  • the storage unit 230 implements the storage module 231 in cooperation with the control unit 210.
  • the storage unit 230 may include a database of learned models.
  • the input / output unit 240 has a function necessary to use the image analysis result providing system.
  • the input / output unit 240 implements the provision module 241 in cooperation with the control unit 210 and the storage unit 230.
  • As examples of input implementations, a liquid crystal display realizing a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, a microphone for voice recognition, and the like can be provided.
  • As examples of output implementations, forms such as a liquid crystal display, a PC display, projection on a projector, and audio output can be considered.
  • The present invention is not particularly limited in function by the input/output method.
  • FIG. 3 is a flow chart in the case of acquiring an unknown image from the camera 100, performing image analysis processing by the computer 200, and providing an image analysis result. The processing executed by each module described above will be described along with this processing.
  • the storage module 231 of the computer 200 stores a plurality of learned models in the storage unit 230 (step S301).
  • The learned model may be acquired from another computer or a storage medium, or may be created by the computer 200. The storage unit 230 may also be provided with a dedicated database for storing learned models. The process of step S301 may be skipped if a plurality of learned models have already been stored and there is no new learned model.
  • FIG. 10 is a diagram showing, as the learned models of crossing A, crossing B, and crossing C, examples of the mathematical expressions and parameters calculated by machine learning and examples of the images used for the machine learning. It also shows an example of the unknown image of crossing D, for which no learned model has yet been created.
  • FIG. 12 is an example of a table showing a data structure of a learned model for each camera.
  • Here, a learned model associates, for each camera, a mathematical expression for analyzing images of the subject with its parameters.
  • The image files with supervised data that were used in the machine learning that calculated the learned model may also be associated with it.
  • As imaging conditions for each camera, the imaging angle and the imaging position may also be associated and stored.
  • In FIG. 10, the learned model created using supervised data from camera A, which captured crossing A, is learned model A; the learned model created using supervised data from camera B, which captured crossing B, is learned model B; and the learned model created using supervised data from camera C, which captured crossing C, is learned model C. For the image of crossing D captured by camera D, no learned model has been created.
  • FIG. 11 is a diagram for schematically describing the relationship between the camera 100, the computer 200, and the subject 400. It is assumed that the camera 100 and the computer 200 can communicate with each other via the communication network 300.
  • the camera 100 in the present invention is an imaging device capable of measuring the distance to a subject.
  • As methods of measuring the distance to the subject, in addition to acquiring it from a sensor or the like of the camera 100, when the subject can be imaged simultaneously from a plurality of different directions, the distance can be measured by learning the relationship between the shift between the images captured by the respective cameras and the actual distance. The imaging angle can also be calculated using the measured distance. Furthermore, when the location of the camera 100 is fixed, the distance to the imaging location may be specified explicitly.
  • As the imaging angle, the angle in degrees by which the camera 100 is tilted from the horizontal direction is used.
  • Here, the imaging angle of the camera 100 is 30 degrees, and the imaging position, that is, the imaging distance, is 5 to 6 m.
  • As imaging conditions, the imaging angle and the imaging distance are taken as examples, but in the case of entry detection at a level crossing, other conditions, such as the presence or absence of an alarm, the presence or absence of a crossing gate, and whether the line is single or double track, may also be included to help determine the degree of similarity between the unknown image and the learned models.
  • First, the acquisition module 211 of the computer 200 requests the camera 100 to transmit an image (step S302). If no learned model exists for the images of the camera 100 at the time of the request, the image acquired from the camera 100 is an unknown image.
  • the camera 100 performs imaging with the imaging unit 10 (step S303).
  • the camera 100 transmits imaging data, which is an unknown image, to the computer 200 via the communication unit 120 (step S304).
  • the acquisition module 211 of the computer 200 acquires an unknown image (step S305).
  • imaging conditions such as an imaging angle and an imaging position are acquired from the camera 100 together with the unknown image.
  • the acquisition module 211 may acquire an image captured by the camera 100 in the past and stored in the storage unit 130, in addition to acquiring an image captured by the camera 100 in real time.
  • the selection module 212 of the computer 200 selects a learned model having similar imaging conditions to the unknown image acquired in step S305 from the learned models stored in step S301 (step S306).
• a learned model whose imaging conditions are similar is chosen from among those of the level crossing A, the level crossing B, and the level crossing C, for which learned models exist. Here, assuming that the imaging angle of the camera D that captures the level crossing D is 20 degrees and its imaging distance is 4-5 m, and after further analysis of the composition of the image capturing the level crossing D, the learned model B is selected.
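The selection in step S306 can be pictured as a nearest-neighbor search over imaging conditions. The sketch below is illustrative only: the condition values for crossings A-C and the Euclidean similarity measure are assumptions for the example, not the patented selection method itself (which may also take the image composition into account).

```python
import math

# Hypothetical imaging conditions of the stored learned models
learned_models = {
    "A": {"angle_deg": 45, "distance_m": 8.5},
    "B": {"angle_deg": 20, "distance_m": 4.5},
    "C": {"angle_deg": 60, "distance_m": 12.0},
}

def select_similar_model(unknown: dict, models: dict) -> str:
    """Pick the learned model whose imaging conditions are closest
    to those of the unknown image (simple Euclidean distance here)."""
    def score(cond):
        return math.hypot(cond["angle_deg"] - unknown["angle_deg"],
                          cond["distance_m"] - unknown["distance_m"])
    return min(models, key=lambda name: score(models[name]))

# Camera D: imaging angle 20 degrees, imaging distance 4-5 m (midpoint 4.5)
camera_d = {"angle_deg": 20, "distance_m": 4.5}
print(select_similar_model(camera_d, learned_models))  # B
```

In this toy setup the conditions of model B match camera D exactly, so model B is selected, mirroring the example in the text.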
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the learned model B (step S307).
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
• since the learned model B is selected as the learned model whose imaging conditions are similar to the level crossing D imaged by the camera D, the learned model B is entered as the use model in the column of the camera D in FIG. 13.
  • the table is filled assuming that yyyyyy is used as a mathematical expression and BBB, b and ⁇ are used as parameters.
  • the field of the teacher data may be left blank.
• from then on, image analysis of the camera D can be performed using the table of FIG. 13 until the selection module 212 again selects the learned model most suitable for the camera D.
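The per-camera table of FIG. 13 can be represented as a simple mapping from camera to use model, mathematical expression, parameters, and teacher data. The sketch below reuses the placeholder names from the description (yyyyyy, BBB, b); the dictionary layout and the camera A/B rows are assumptions for illustration.

```python
# One row per camera, mirroring FIG. 13: before camera D has its own
# learned model, it reuses model B, and its teacher-data field is blank.
model_table = {
    "camera_B": {"model": "B", "formula": "yyyyyy",
                 "params": ["BBB", "b"], "teacher_data": "crossing B images"},
    "camera_D": {"model": "B", "formula": "yyyyyy",
                 "params": ["BBB", "b"], "teacher_data": None},
}

def model_for(camera: str) -> str:
    """Look up which learned model a camera's images are analyzed with."""
    return model_table[camera]["model"]

print(model_for("camera_D"))  # B
```

Once a dedicated model D is created (FIG. 14), only the "camera_D" row would need to be rewritten.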
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S308).
• the image analysis result may be provided not only by displaying the image but also, for example, together with a warning sound or light; output is made in accordance with the purpose of the system being provided.
• image analysis here includes, for example, face recognition for personal identification, determination of pest damage on agricultural products, inventory confirmation in a warehouse, and recognition of affected areas for medical diagnosis; the present invention is applicable to whichever is appropriate for the purpose of the system. Further, the provision of the image analysis result need not be limited to output to the input/output unit 240 of the computer 200; output suited to the system may be performed, such as outputting to other devices via the communication unit 220.
• as described above, by selecting and using a learned model of a known image whose imaging conditions are similar to those of the unknown image, it becomes possible to provide an image analysis result providing system capable of outputting an accurate image analysis result without spending learning time.
  • FIG. 4 is a view showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions in the case of performing a learned model creation process of an unknown image.
  • the control unit 210 of the computer 200 cooperates with the storage unit 230 to implement the creation module 214.
  • FIG. 5 is a flowchart of the camera 100 and the computer 200 in the case of performing a learned model creation process of an unknown image. The processing executed by each module described above will be described along with this processing.
  • the processes in steps S501 to S503 in FIG. 5 correspond to the processes in steps S301 to S303 in FIG.
  • the process of step S501 may be skipped if a plurality of learned models are already stored and if a new learned model does not exist.
  • the camera 100 transmits the unknown image captured by the imaging unit 10 to the computer 200 via the communication unit 120 (step S504).
• for the machine learning, it is desirable to acquire as many unknown images captured by the camera 100 as possible. Therefore, not only images captured by the camera 100 in real time but also images captured by the camera 100 in the past and stored in the storage unit 130 may be transmitted.
  • the acquisition module 211 of the computer 200 acquires a plurality of unknown images (step S505).
  • imaging conditions such as an imaging angle and an imaging position are acquired from the camera 100 together with the respective unknown images.
  • the creation module 214 of the computer 200 assigns teacher data to the unknown image acquired in step S505 (step S506).
• as the operation of assigning teacher data, a label representing the correct answer of the image analysis result is added to each of the plurality of acquired unknown images.
• labels for supervised learning must be added in accordance with the purpose of the system, considering how detailed the image analysis results need to be in practice. For entry detection at a level crossing, it must be decided whether a simple "entry / no entry" label is sufficient, or whether finer divisions such as no entry, entry (adult), entry (child), entry (elderly person), entry (vehicle), entry (bicycle), and entry (animal) are necessary.
  • the creation module 214 of the computer 200 performs machine learning by supervised learning using the unknown image to which the teacher data is added (step S507).
  • the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S507 (step S508).
  • the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S509).
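Steps S506 to S509 (assign teacher data, machine-learn, create the model, store it) can be sketched end to end as follows. The feature values, labels, and the stand-in "learner" are all hypothetical; a real system would apply a genuine supervised learning algorithm to the labeled unknown images.

```python
# Step S506: attach teacher labels (correct answers) to the unknown images.
unknown_images = [{"id": 1, "pixels": [0.1, 0.9]},
                  {"id": 2, "pixels": [0.8, 0.2]}]
labels = {1: "entry", 2: "no entry"}

training_set = [(img["pixels"], labels[img["id"]]) for img in unknown_images]

def train(samples):
    """Step S507: stand-in for supervised learning. Here we merely record
    the per-label mean of the first feature as the model "parameters"."""
    params = {}
    for features, label in samples:
        params.setdefault(label, []).append(features[0])
    return {label: sum(v) / len(v) for label, v in params.items()}

# Step S508: create the learned model; step S509: store it (storage unit 230).
model_store = {}
model_store["camera_D"] = {"model": "D", "params": train(training_set)}
print(model_store["camera_D"]["params"])
```

After this, the table of FIG. 14 would point camera D at its own learned model D rather than the substitute model B.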
  • FIG. 14 is an example of a table showing a learned model used for image analysis for each camera when a learned model of an unknown image captured by a camera D is created.
  • the learned model D of the camera D created in step S508 is described in the column of the camera D in FIG.
  • the table is filled, assuming that a learned model D as a use model of the camera D, vvvvvvv as a mathematical expression, and DDD, d, dD as parameters are used.
• the teacher data column also describes the teacher data. From this point onward, image analysis of the camera D can be performed using the table of FIG. 14 until the teacher data for the camera D is increased again and a new learned model is created.
• the learned model creation process should be executed appropriately according to the operation of the image analysis result providing system, for example at times when the system load is low, so as not to affect other image analysis tasks.
• as described above, a learned model of a known image whose imaging conditions are similar to the unknown image to be newly analyzed can be used immediately, while machine learning matched to the new unknown image is performed by the learned model creation process, so that a dedicated learned model becomes available.
  • FIG. 6 is a diagram showing the relationship between the functional blocks of the camera 100 and the computer 200 and the respective functions when the image analysis process is switched depending on whether the learned model creation process of the unknown image is finished.
• FIG. 7 corresponds to process A of the flowchart of FIG. 6, and is a flowchart of the learned model creation process for an unknown image, performed by the computer 200 when the learned model creation process has not finished and machine learning of the unknown image is possible.
• FIG. 8 corresponds to process B of the flowchart of FIG. 6, and is a flowchart of the learned model selection process performed by the computer 200 when the learned model creation process of the unknown image has not finished and machine learning of the unknown image is not possible.
• since steps S601 to S605 of FIG. 6 correspond to steps S301 to S305 of FIG. 3, step S606 and subsequent steps will be described.
  • the creation module 214 of the computer 200 confirms whether the learned model of the unknown image has been created for the imaging data acquired in step S605 (step S606).
• if the imaging data acquired in step S605 is the first data acquired from the camera 100, it is an unknown image, and therefore no learned model of the unknown image has been created yet.
• the creation module 214 determines whether machine learning is possible using the unknown image acquired in step S605 or the previously stored unknown images (step S607).
• this determination may be made according to the operational status of the image analysis result providing system.
• when machine learning is possible, the process proceeds to the flowchart of process A in FIG. 7 (step S608).
  • the creation module 214 of the computer 200 adds teacher data to the unknown image acquired in step S605 or the unknown image stored before that (step S701).
• as the operation of assigning teacher data, a label representing the correct answer of the image analysis result is added to each of the plurality of acquired unknown images.
  • the creating module 214 of the computer 200 performs machine learning by supervised learning using the unknown image to which the teacher data is added (step S702).
  • the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S702 (step S703).
  • the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S704).
  • FIG. 14 is an example of a table showing a learned model used for image analysis for each camera when a learned model of an unknown image captured by a camera D is created.
  • the learned model D of the camera D created in step S703 is described in the column of the camera D in FIG.
  • the table is filled, assuming that a learned model D as a use model of the camera D, vvvvvvv as a mathematical expression, and DDD, d, dD as parameters are used.
• the teacher data column also describes the teacher data. From this point onward, image analysis of the camera D can be performed using the table of FIG. 14 until the teacher data for the camera D is increased again and a new learned model is created.
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the created learned model D (step S705).
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S706). Thereafter, the process returns to the flowchart of FIG. 6 and proceeds to step S614.
  • the creation module 214 stores the unknown image acquired in step S605 in the storage unit 230 (step S609). This is for later use as teacher data when performing machine learning for learned model creation processing on an unknown image.
• after the storage process of step S609, the process proceeds to the flowchart of process B in FIG. 8 (step S610).
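The branching of steps S606 to S610 can be summarized in one function. The function name and return strings below are illustrative stand-ins for the three paths described above (analysis with an already created model, process A, and process B); they are not part of the patented system.

```python
def handle_image(camera: str, image: str, model_store: dict,
                 pending: list, can_learn: bool) -> str:
    """Sketch of steps S606-S610: decide which process handles a newly
    acquired image from a camera."""
    if camera in model_store:                 # step S606: model already exists
        return "analyze with own model"       # steps S611-S613
    if can_learn:                             # step S607: machine learning possible
        return "process A: create model"      # FIG. 7, steps S701-S706
    pending.append(image)                     # step S609: keep image as future teacher data
    return "process B: select similar model"  # FIG. 8, steps S801-S803

pending = []
print(handle_image("camera_D", "img-001", {}, pending, can_learn=False))
# camera D has no model yet and learning is not yet possible, so process B runs
```

The stored `pending` images are exactly the ones later labeled as teacher data once machine learning becomes possible.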
  • the selection module 212 of the computer 200 selects a learned model whose imaging condition is similar to the unknown image acquired in step S605 from the learned models stored in step S601 (step S801).
• a learned model whose imaging conditions are similar is chosen from among those of the level crossing A, the level crossing B, and the level crossing C, for which learned models exist. Here, assuming that the imaging angle of the camera D that captures the level crossing D is 20 degrees and its imaging distance is 4-5 m, and after further analysis of the composition of the image capturing the level crossing D, the learned model B is selected.
  • the image analysis module 213 performs image analysis of the unknown image captured by the camera D using the learned model B (step S802).
  • FIG. 13 is an example of a table showing a learned model used for image analysis for each camera when there is no learned model of the unknown image captured by the camera D.
• since the learned model B is selected as the learned model whose imaging conditions are similar to the level crossing D captured by the camera D, the learned model B is entered as the use model in the column of the camera D in FIG. 13.
  • the table is filled assuming that yyyyyy is used as a mathematical expression and BBB, b and ⁇ are used as parameters.
  • the field of the teacher data may be left blank.
• from then on, image analysis of the camera D can be performed using the table of FIG. 13 until the selection module 212 again selects the learned model most suitable for the camera D.
  • the provision module 241 provides the input / output unit 240 of the computer 200 with the image analysis result (step S803). Thereafter, the process returns to the flowchart of FIG. 6 and proceeds to step S614.
  • the selection module 212 selects and applies the learned model D created in step S703 of process A (step S611).
  • the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D using the learned model D (step S612).
  • the provision module 241 of the computer 200 provides the input / output unit 240 of the computer 200 with the image analysis result (step S613).
• in step S706, step S803, and step S613, the provision module 241 provides the image analysis result to the input/output unit 240 of the computer 200.
• image analysis here includes, for example, face recognition for personal identification, determination of pest damage on agricultural products, inventory confirmation in a warehouse, and recognition of affected areas for medical diagnosis; the present invention is applicable to whichever is appropriate for the purpose of the system. Further, the provision of the image analysis result need not be limited to output to the input/output unit 240 of the computer 200; output suited to the system may be performed, such as outputting to other devices via the communication unit 220.
• in step S614, it is confirmed whether the image analysis result provision process should be ended. If not, the process returns to step S602 and continues; if so, the image analysis result provision process ends.
• as described above, a learned model of a known image whose imaging conditions are similar to the unknown image to be newly analyzed can be used immediately, and by performing machine learning matched to the new unknown image in the learned model creation process, it is possible to create a dedicated learned model.
  • FIG. 9 is a flowchart of the computer 200 in the case of performing machine learning of an unknown image by adding an image analysis result of the unknown image as teacher data when performing a learned model creation process of the unknown image.
• the configurations of the camera 100 and the computer 200 are the same as described above. Although FIG. 9 is described from step S901, it is assumed that processing equivalent to steps S301 to S308 of FIG. 3 has been performed beforehand, image analysis of the unknown image by the selected learned model has been carried out, and the image analysis result has been obtained.
  • the creation module 214 of the computer 200 assigns the image analysis result in step S307 as teacher data to the unknown image acquired in step S305 (step S901).
• this makes it possible to significantly reduce the cost of manually adding teacher data as correct answers to the large number of images necessary for machine learning.
  • the creating module 214 performs machine learning by supervised learning using the unknown image to which the teacher data is added (step S902).
  • the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S902 (step S903).
  • the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S904).
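Step S901's use of the selected model's output as teacher data is essentially what is commonly called pseudo-labeling. In this hedged sketch, `predict_with_b` and the image names are invented stand-ins for image analysis by the selected learned model B; a real system would run the actual analysis of steps S306-S307.

```python
def predict_with_b(image: str) -> str:
    """Stand-in for analysis of an unknown image with the selected
    learned model B (the real system's steps S306-S307)."""
    return "entry" if "person" in image else "no entry"

unknown_images = ["frame_person_01", "frame_empty_02"]

# Step S901: attach model B's analysis result as the teacher label,
# instead of labeling each image by hand.
teacher_data = [(img, predict_with_b(img)) for img in unknown_images]
print(teacher_data)
```

The resulting `teacher_data` pairs then feed the supervised learning of step S902, which is what removes the manual labeling cost noted above.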
• as described above, by selecting the learned model of a known image whose imaging conditions are similar to the unknown image to be newly analyzed, the image analysis result of the unknown image produced by the selected learned model can be used directly as teacher data.
  • the above-described means and functions are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program.
• the program may be provided, for example, from a computer via a network (SaaS: software as a service), or in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), a DVD (DVD-ROM, DVD-RAM, etc.), or a compact memory.
  • the computer reads the program from the recording medium, transfers the program to an internal storage device or an external storage device, stores it, and executes it.
  • the program may be recorded in advance in a storage device (recording medium) such as, for example, a magnetic disk, an optical disk, or a magneto-optical disk, and may be provided from the storage device to the computer via a communication line.

Abstract

[Problem] To provide a system for providing an image analysis result, a method for providing an image analysis result, and a program with which it is possible to output a highly accurate image analysis result for an unknown image that needs to be newly analyzed, without taking learning time. [Solution] The present invention is provided with: a storage module 231 for storing a learned model having finished machine learning in which a known image is image-analyzed; an acquisition module 211 for acquiring an unknown image for which no learned model has yet been created; a selection module 212 for selecting, from among the stored learned models, the learned model of a known image that is similar in imaging conditions to the acquired unknown image; an image analysis module 213 for using the selected learned model to image-analyze the unknown image; and a provision module 241 for providing the result of the image analysis.

Description

Image analysis result providing system, image analysis result providing method, and program
 The present invention relates to an image analysis result providing system, an image analysis result providing method, and a program capable of outputting an accurate image analysis result without spending learning time, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has image-analyzed known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed.
 There has been proposed a method of providing a mechanism that automatically categorizes person images by performing image analysis processing on them to determine who is shown (Patent Document 1).
JP 2015-69580
 Supervised learning is a well-known machine learning method for artificial intelligence to perform image analysis.
 However, supervised learning generally requires preparing a large number of images, from tens of thousands to millions or more, and adding correct teacher data to the images before training. To raise the accuracy of image analysis with supervised learning, it therefore takes considerable effort to prepare the images for learning, and the learning itself requires a long time.
 For example, when entry detection, that is, determining by image analysis whether a person has entered a level crossing, is performed, the position of the monitoring camera is usually fixed. Therefore, by performing supervised machine learning for each level crossing, such as level crossing A, level crossing B, and level crossing C, the accuracy of entry detection for each of them can be improved, and the subsequent image analysis processing uses the learned models of level crossing A, level crossing B, and level crossing C, respectively. However, to newly perform entry detection at level crossing D, a large number of tagged images serving as teacher data for supervised learning for level crossing D must be prepared, and machine learning must be performed from scratch on that basis, so it takes time and effort before entry detection can be introduced.
 To address this problem, the inventor focused on the fact that, by selecting and using, from among level crossings A, B, and C for which machine learning has already been completed, the learned model of the level crossing whose imaging conditions are similar to those of level crossing D, an image analysis accuracy for entry detection above a certain level can be obtained at level crossing D from the beginning.
 An object of the present invention is to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting an accurate image analysis result without spending learning time, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has image-analyzed known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed.
 The present invention provides the following solutions.
 The invention according to the first feature provides an image analysis result providing system comprising:
storage means for storing a machine-learned model obtained by image analysis of a known image;
acquisition means for acquiring an unknown image for which a learned model has not yet been created;
selection means for selecting, from among the stored learned models, the learned model of the known image whose imaging conditions are similar to those of the acquired unknown image;
image analysis means for performing image analysis of the unknown image using the selected learned model; and
provision means for providing the result of the image analysis.
 According to the invention of the first feature, the system comprises: storage means for storing a machine-learned model obtained by image analysis of a known image; acquisition means for acquiring an unknown image for which a learned model has not yet been created; selection means for selecting, from among the stored learned models, the learned model of the known image whose imaging conditions are similar to those of the acquired unknown image; image analysis means for performing image analysis of the unknown image using the selected learned model; and provision means for providing the result of the image analysis.
 The invention of the first feature falls in the category of an image analysis result providing system, but an image analysis result providing method and a program exhibit the same operations and effects.
 The invention according to the second feature is the image analysis result providing system of the first feature, wherein the imaging conditions are an imaging position and an imaging angle with respect to the imaging target.
 According to the invention of the second feature, in the image analysis result providing system of the first feature, the imaging conditions are an imaging position and an imaging angle with respect to the imaging target.
 The invention according to the third feature is the image analysis result providing system of the first or second feature, wherein the learned model consists of the mathematical expression and parameters used for the image analysis, calculated by the machine learning, and the images used for the machine learning.
 According to the invention of the third feature, in the image analysis result providing system of the first or second feature, the learned model consists of the mathematical expression and parameters used for the image analysis, calculated by the machine learning, and the images used for the machine learning.
 The invention according to the fourth feature is the image analysis result providing system of any of the first to third features, further comprising creation means for creating a new machine-learned model by image analysis of the unknown image.
 According to the invention of the fourth feature, the image analysis result providing system of any of the first to third features comprises creation means for creating a new machine-learned model by image analysis of the unknown image.
 The invention according to the fifth feature is the image analysis result providing system of the fourth feature, wherein the image analysis means performs image analysis of the unknown image using the selected learned model during the period until the new machine-learned model is created.
 According to the invention of the fifth feature, in the image analysis result providing system of the fourth feature, the image analysis means performs image analysis of the unknown image using the selected learned model during the period until the new machine-learned model is created.
 The invention according to the sixth feature is the image analysis result providing system of any of the first to fifth features, further comprising creation means for creating a new machine-learned model by image analysis of the unknown image, using as teacher data the analysis result obtained by analyzing the unknown image with the selected learned model.
 According to the invention of the sixth feature, the image analysis result providing system of any of the first to fifth features comprises creation means for creating a new machine-learned model by image analysis of the unknown image, using as teacher data the analysis result obtained by analyzing the unknown image with the selected learned model.
 The invention according to the seventh feature provides an image analysis result providing method comprising the steps of:
storing a machine-learned model obtained by image analysis of a known image;
acquiring an unknown image for which a learned model has not yet been created;
selecting, from among the stored learned models, the learned model of the known image whose imaging conditions are similar to those of the acquired unknown image;
performing image analysis of the unknown image using the selected learned model; and
providing the result of the image analysis.
 The invention according to the eighth feature provides a program causing an image analysis result providing system to execute the steps of:
storing a machine-learned model obtained by image analysis of a known image;
acquiring an unknown image for which a learned model has not yet been created;
selecting, from among the stored learned models, the learned model of the known image whose imaging conditions are similar to those of the acquired unknown image;
performing image analysis of the unknown image using the selected learned model; and
providing the result of the image analysis.
 According to the present invention, it is possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting an accurate image analysis result without spending learning time, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has image-analyzed known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed.
FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
FIG. 2 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationships among their functions.
FIG. 3 is a flowchart of acquiring an unknown image from the camera 100, performing image analysis on the computer 200, and providing the image analysis result.
FIG. 4 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationships among their functions when performing the learned-model creation process for an unknown image.
FIG. 5 is a flowchart of the camera 100 and the computer 200 when performing the learned-model creation process for an unknown image.
FIG. 6 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationships among their functions when the image analysis process is switched depending on whether the learned-model creation process for the unknown image has been completed.
FIG. 7 is a flowchart of the learned-model creation process for an unknown image performed by the computer 200, corresponding to A in the flowchart of FIG. 6, when the learned-model creation process has not been completed and machine learning of the unknown image is possible.
FIG. 8 is a flowchart of the learned-model selection process performed by the computer 200, corresponding to B in the flowchart of FIG. 6, when the learned-model creation process for the unknown image has not been completed and machine learning of the unknown image is not possible.
FIG. 9 is a flowchart of the computer 200 when performing machine learning of an unknown image by adding the image analysis result of the unknown image as teacher data in the learned-model creation process.
FIG. 10 is a diagram showing, as learned models of level crossings A, B, and C, examples of the mathematical expressions and parameters calculated by machine learning and examples of the images used for machine learning, together with an example of an image of level crossing D, which is an unknown image.
FIG. 11 is a diagram for schematically describing the relationship among the camera 100, the computer 200, and the subject 400.
FIG. 12 is an example of a table showing the data structure of the learned model for each camera.
FIG. 13 is an example of a table showing the learned model used for image analysis for each camera when there is no learned model for the unknown image captured by camera D.
FIG. 14 is an example of a table showing the learned model used for image analysis for each camera when a learned model for the unknown image captured by camera D has been created.
Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings. Note that this is merely an example, and the technical scope of the present invention is not limited thereto.
[Overview of the Image Analysis Result Providing System]
FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. The outline of the present invention will be described with reference to FIG. 1. The image analysis result providing system includes a camera 100, a computer 200, and a communication network 300.
In FIG. 1, the number of cameras 100 is not limited to one and may be plural. The computer 200 is not limited to a physical device and may be a virtual device.
As shown in FIG. 2, the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130. The computer 200, also shown in FIG. 2, includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240. The control unit 210 cooperates with the communication unit 220 and the storage unit 230 to implement the acquisition module 211, and cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213. The storage unit 230 cooperates with the control unit 210 to implement the storage module 231. The input/output unit 240 cooperates with the control unit 210 and the storage unit 230 to implement the provision module 241. The communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
The camera 100 is an imaging apparatus capable of data communication with the computer 200, provided with imaging devices such as an image sensor and a lens, and capable of measuring the distance to the subject 400. A web camera is illustrated here as an example, but the camera may be any imaging apparatus with the necessary functions, such as a digital camera, a digital video camera, a camera mounted on an unmanned aerial vehicle, a wearable-device camera, a security camera, an in-vehicle camera, or a 360-degree camera. Captured images may also be stored in the storage unit 130.
The computer 200 is a computing device capable of data communication with the camera 100. A desktop computer is illustrated here as an example, but the computer may be a mobile phone, a personal digital assistant, a tablet terminal, or a personal computer, as well as an appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
In the image analysis result providing system of FIG. 1, the storage module 231 of the computer 200 first stores a plurality of learned models in the storage unit 230 (step S01). The learned models may be acquired from another computer or a storage medium, or may be created on the computer 200. A dedicated database may also be provided in the storage unit 230.
FIG. 10 shows, as learned models of level crossings A, B, and C, examples of the mathematical expressions and parameters calculated by machine learning and examples of the images used for machine learning. It also shows an example of an unknown image of level crossing D, for which no learned model has yet been created.
FIG. 12 is an example of a table showing the data structure of the learned model for each camera. In the present invention, a learned model associates, for each camera, a mathematical expression for image analysis of the subject with its parameters. The image files with teacher data that were used in the machine learning for deriving the learned model may also be associated with it, and the imaging angle and imaging position may be stored in association as the imaging conditions for each camera. The learned model created using supervised data from camera A, which captured level crossing A in FIG. 10, is learned model A; the learned model created using supervised data from camera B, which captured level crossing B, is learned model B; and the learned model created using supervised data from camera C, which captured level crossing C, is learned model C. No learned model has been created for the images of level crossing D captured by camera D.
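The per-camera table of FIG. 12 could be represented, for illustration, as follows. This is a minimal sketch under assumed names and values: the patent does not specify a storage format, and the placeholder formulas (xxxxxx, yyyyyy, zzzzzz) and parameter tuples simply mirror the figure.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple, List

@dataclass
class LearnedModel:
    """One row of the per-camera learned-model table (FIG. 12).

    Field names are hypothetical; the patent only requires that each camera
    be associated with a formula and its parameters, optionally together with
    the teacher-data image files and the imaging conditions.
    """
    camera_id: str                                  # e.g. "camera_A"
    formula: str                                    # learned expression, e.g. "yyyyyy"
    parameters: Tuple[str, ...]                     # e.g. ("BBB", "b", "beta")
    imaging_angle_deg: Optional[float] = None       # imaging condition
    imaging_distance_m: Optional[Tuple[float, float]] = None  # e.g. (5.0, 6.0) for "5-6 m"
    teacher_images: List[str] = field(default_factory=list)   # labeled image files

# The storage module (231) keeps one entry per camera; camera D has none yet.
model_store = {
    "camera_A": LearnedModel("camera_A", "xxxxxx", ("AAA", "a", "alpha"), 30.0, (5.0, 6.0)),
    "camera_B": LearnedModel("camera_B", "yyyyyy", ("BBB", "b", "beta"), 20.0, (4.0, 5.0)),
    "camera_C": LearnedModel("camera_C", "zzzzzz", ("CCC", "c", "gamma"), 45.0, (8.0, 9.0)),
}
```

The absence of a `"camera_D"` key models the "not created" row of FIG. 12.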
FIG. 11 is a diagram for schematically describing the relationship among the camera 100, the computer 200, and the subject 400. The camera 100 and the computer 200 can communicate with each other via the communication network 300. The camera 100 in the present invention is an imaging apparatus capable of measuring the distance to the subject. Besides acquiring the distance from a sensor of the camera 100 or the like, when the subject can be imaged simultaneously from a plurality of different directions, the distance can also be measured by learning the relationship between the displacement between the images captured by the respective cameras and the actual distance. The imaging angle can also be calculated using the measured distance. Furthermore, when the location of the camera 100 is fixed, the distance to the imaging location may be specified explicitly. The imaging angle is defined as the angle by which the camera 100 is tilted from the horizontal. In the example of FIG. 11, for the subject 400, the imaging angle of the camera 100 is 30 degrees and the imaging position, that is, the imaging distance, is 5-6 m. Here, the imaging angle and the imaging position are given as examples of imaging conditions, but other information useful for judging the similarity between the unknown image and a learned model may also be included; for level-crossing intrusion detection, for example, the presence or absence of an alarm, the presence, absence, or shape of a crossing gate, and whether the track is single or double.
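The stereo distance measurement and angle derivation described above can be sketched with the standard pinhole-camera relation (distance = focal length × baseline / disparity). The patent instead learns the disparity-to-distance mapping from data; the closed form and all numeric values below are assumptions shown only as the geometry such learning approximates.

```python
import math

def stereo_distance_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to the subject from the image displacement (disparity)
    between two cameras viewing it from different directions: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def imaging_angle_deg(camera_height_m: float, ground_distance_m: float) -> float:
    """Tilt of the camera from the horizontal, derived from the measured
    distance, matching the patent's definition of the imaging angle."""
    return math.degrees(math.atan2(camera_height_m, ground_distance_m))

# Hypothetical example: 700 px focal length, 10 cm baseline, 12 px disparity
distance = stereo_distance_m(700, 0.10, 12)   # roughly 5.8 m
angle = imaging_angle_deg(3.0, 5.2)           # roughly 30 degrees, as in FIG. 11
```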
As a preliminary step before storing a plurality of learned models in the storage unit 230, machine learning by supervised learning requires a large number of appropriately labeled images to be prepared as teacher data, and the learning itself also takes a long time. Therefore, assuming that new unknown images will be analyzed later, it is preferable to prepare learned models that cover a range of imaging conditions for each camera. As for the labels for supervised learning, it is also necessary to consider how detailed the image analysis results actually need to be, and to add labels in accordance with the purpose of the system: for level-crossing intrusion detection, whether "intrusion / no intrusion" alone is sufficient, or whether detailed classes such as "no intrusion, intrusion (adult), intrusion (child), intrusion (elderly person), intrusion (vehicle), intrusion (bicycle), intrusion (animal), possible intrusion" are required.
Returning to FIG. 1, the camera 100 transmits captured data, which is an unknown image, to the computer 200 (step S02), and the acquisition module 211 of the computer 200 acquires the unknown image (step S03). Together with the unknown image, the acquisition module 211 also acquires imaging conditions such as the imaging angle and the imaging position from the camera 100. Although the flow described here has the camera 100 transmit the captured data, the acquisition module 211 may instead instruct the camera 100 to transmit the captured data, and the camera 100 may transmit it in response. The acquisition module 211 may acquire not only images that the camera 100 is capturing in real time but also images that the camera 100 captured in the past and saved in the storage unit 130.
Next, the selection module 212 of the computer 200 selects, from among the learned models stored in step S01, a learned model whose imaging conditions are similar to those of the unknown image acquired in step S03 (step S04). For example, if the captured unknown image is of level crossing D in FIG. 10, a model with similar imaging conditions is selected from among the learned models of level crossings A, B, and C, for which learned models exist. Assuming that the imaging angle of camera D, which captured level crossing D, is 20 degrees and its imaging distance is 4-5 m, and after further analyzing the composition of the image of level crossing D, learned model B is selected here.
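One way the selection of step S04 could be realized is a simple nearest-neighbor score over the stored imaging conditions. The weighting and the condition values below are assumptions for illustration; the patent leaves the similarity measure open and also allows extra cues (alarms, crossing gates, single or double track, image composition) to enter the comparison.

```python
def condition_distance(angle_a: float, dist_a: float,
                       angle_b: float, dist_b: float,
                       w_angle: float = 1.0, w_dist: float = 10.0) -> float:
    """Dissimilarity between two sets of imaging conditions.

    Hypothetical weighting: 1 point per degree of angle difference and
    10 points per metre of distance difference.
    """
    return w_angle * abs(angle_a - angle_b) + w_dist * abs(dist_a - dist_b)

def select_model(unknown_angle: float, unknown_dist: float, stored_models: list) -> dict:
    """Pick the stored model whose imaging conditions are closest (step S04)."""
    return min(
        stored_models,
        key=lambda m: condition_distance(unknown_angle, unknown_dist,
                                         m["angle"], m["dist"]),
    )

stored = [
    {"name": "model_A", "angle": 30, "dist": 5.5},  # illustrative condition values
    {"name": "model_B", "angle": 20, "dist": 4.5},
    {"name": "model_C", "angle": 45, "dist": 8.5},
]
# Camera D: 20 degrees, 4-5 m (midpoint 4.5 m) -> model_B is the nearest match
best = select_model(20, 4.5, stored)
```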
Next, the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by camera D using learned model B (step S05).
FIG. 13 is an example of a table showing the learned model used for image analysis for each camera when there is no learned model for the unknown image captured by camera D. Since learned model B was selected in step S04 as the learned model whose imaging conditions are similar to those of level crossing D captured by camera D, the column for camera D in FIG. 13 is filled in with learned model B as the model used, yyyyyy as the mathematical expression, and BBB, b, and β as the parameters. The imaging angle and imaging position, which are imaging conditions, should preferably be filled in with the actual imaging conditions of camera D. Since supervised learning has not been performed on images captured by camera D, the teacher-data column may be left blank. Thereafter, the table of FIG. 13 can be used for image analysis of camera D until a learned model for camera D is created, or until the selection module 212 again selects the learned model most suitable for camera D as other learned models are added.
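The per-camera assignment of FIG. 13 behaves like a fallback lookup: each camera resolves to its own learned model if one exists, otherwise to the substitute chosen in step S04. A minimal sketch, with hypothetical names:

```python
# Models that were actually trained (FIG. 12): camera D has none yet.
own_models = {"camera_A": "model_A", "camera_B": "model_B", "camera_C": "model_C"}

# Substitutes chosen by the selection module 212 in step S04 (FIG. 13).
substitutes = {"camera_D": "model_B"}

def model_for(camera_id: str) -> str:
    """Resolve which learned model to use for a camera.

    A camera's own model always wins; once a model for camera D is created
    and registered in own_models, this function returns it automatically,
    matching the transition from FIG. 13 to FIG. 14.
    """
    if camera_id in own_models:
        return own_models[camera_id]
    return substitutes[camera_id]

print(model_for("camera_D"))        # camera D borrows model B (FIG. 13)
own_models["camera_D"] = "model_D"  # camera D's own model is later created
print(model_for("camera_D"))        # now resolves to model D (FIG. 14)
```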
Finally, the provision module 241 of the computer 200 provides the image analysis result to the input/output unit 240 of the computer 200 (step S06). The output is made to suit the purpose of the image analysis result providing system: for intrusion detection at level crossing D, for example, whether only "intrusion / no intrusion" is shown or detailed classes such as "no intrusion, intrusion (adult), intrusion (child), intrusion (elderly person), intrusion (vehicle), intrusion (bicycle), intrusion (animal), possible intrusion" are shown, and whether the result is presented not only as a display over the image but also by a warning sound or light. Although level-crossing intrusion detection has been described here as an example, the image analysis is applicable to whatever suits the purpose of the system, for example, face recognition for personal identification, determination of pest damage to crops, inventory checking in a warehouse, or recognition of images of affected areas for medical diagnosis.
According to the present invention, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has performed image analysis on known images, the learned model of a known image whose imaging conditions are similar to those of an unknown image to be newly analyzed, it is possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting accurate image analysis results without spending additional learning time.
[Description of Each Function]
FIG. 2 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationships among their functions. The camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130. The computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240. The control unit 210 cooperates with the communication unit 220 and the storage unit 230 to implement the acquisition module 211, and cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213. The storage unit 230 cooperates with the control unit 210 to implement the storage module 231. The input/output unit 240 cooperates with the control unit 210 and the storage unit 230 to implement the provision module 241. The communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
The camera 100 is an imaging apparatus capable of data communication with the computer 200, provided with imaging devices such as an image sensor and a lens, and capable of measuring the distance to the subject 400. A web camera is illustrated here as an example, but the camera may be any imaging apparatus with the necessary functions, such as a digital camera, a digital video camera, a camera mounted on an unmanned aerial vehicle, a wearable-device camera, a security camera, an in-vehicle camera, or a 360-degree camera. Captured images may also be stored in the storage unit 130.
The camera 100 includes, as the imaging unit 10, a lens, an image sensor, various buttons, and imaging devices such as a flash, and captures images such as moving images and still images. An image obtained by imaging is a precise image carrying as much information as is necessary for image analysis. The resolution, camera angle, camera magnification, and the like at the time of imaging may also be specifiable.
The control unit 110 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
The communication unit 120 includes a device for enabling communication with other apparatuses, for example, a WiFi (Wireless Fidelity) device conforming to IEEE 802.11 or a wireless device conforming to the IMT-2000 standard, such as a third- or fourth-generation mobile communication system. A wired LAN connection may also be used.
The storage unit 130 includes a data storage unit implemented by a hard disk or semiconductor memory, and stores captured images and necessary data such as imaging conditions.
The computer 200 is a computing device capable of data communication with the camera 100. A desktop computer is illustrated here as an example, but the computer may be a mobile phone, a personal digital assistant, a tablet terminal, or a personal computer, as well as an appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
The control unit 210 includes a CPU, a RAM, a ROM, and the like. The control unit 210 cooperates with the communication unit 220 and the storage unit 230 to implement the acquisition module 211, and cooperates with the storage unit 230 to implement the selection module 212 and the image analysis module 213.
The communication unit 220 includes a device for enabling communication with other apparatuses, for example, a WiFi device conforming to IEEE 802.11 or a wireless device conforming to the IMT-2000 standard, such as a third- or fourth-generation mobile communication system. A wired LAN connection may also be used.
The storage unit 230 includes a data storage unit implemented by a hard disk or semiconductor memory, and stores data necessary for processing, such as captured images, teacher data, and image analysis results. The storage unit 230 cooperates with the control unit 210 to implement the storage module 231. The storage unit 230 may also include a database of learned models.
The input/output unit 240 has the functions necessary for using the image analysis result providing system. The input/output unit 240 cooperates with the control unit 210 and the storage unit 230 to implement the provision module 241. Examples for realizing input include a liquid crystal display with a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, and a microphone for voice recognition. Examples for realizing output include a liquid crystal display, a PC display, projection by a projector, and audio output. The functions of the present invention are not particularly limited by the input/output method.
[Image Analysis Result Providing Process]
FIG. 3 is a flowchart of acquiring an unknown image from the camera 100, performing image analysis on the computer 200, and providing the image analysis result. The processing executed by each of the modules described above is explained along with this process.
First, the storage module 231 of the computer 200 stores a plurality of learned models in the storage unit 230 (step S301). The learned models may be acquired from another computer or a storage medium, or may be created on the computer 200. A dedicated database for storing the learned models may also be provided in the storage unit 230. The processing of step S301 may be skipped when a plurality of learned models have already been stored and no new learned model exists.
FIG. 10 shows, as learned models of level crossings A, B, and C, examples of the mathematical expressions and parameters calculated by machine learning and examples of the images used for machine learning. It also shows an example of an unknown image of level crossing D, for which no learned model has yet been created.
FIG. 12 is an example of a table showing the data structure of the learned model for each camera. In the present invention, a learned model associates, for each camera, a mathematical expression for image analysis of the subject with its parameters. The image files with teacher data that were used in the machine learning for deriving the learned model may also be associated with it, and the imaging angle and imaging position may be stored in association as the imaging conditions for each camera. The learned model created using supervised data from camera A, which captured level crossing A in FIG. 10, is learned model A; the learned model created using supervised data from camera B, which captured level crossing B, is learned model B; and the learned model created using supervised data from camera C, which captured level crossing C, is learned model C. No learned model has been created for the images of level crossing D captured by camera D.
FIG. 11 is a diagram for schematically describing the relationship among the camera 100, the computer 200, and the subject 400. The camera 100 and the computer 200 can communicate with each other via the communication network 300. The camera 100 in the present invention is an imaging apparatus capable of measuring the distance to the subject. Besides acquiring the distance from a sensor of the camera 100 or the like, when the subject can be imaged simultaneously from a plurality of different directions, the distance can also be measured by learning the relationship between the displacement between the images captured by the respective cameras and the actual distance. The imaging angle can also be calculated using the measured distance. Furthermore, when the location of the camera 100 is fixed, the distance to the imaging location may be specified explicitly. The imaging angle is defined as the angle by which the camera 100 is tilted from the horizontal. In the example of FIG. 11, for the subject 400, the imaging angle of the camera 100 is 30 degrees and the imaging position, that is, the imaging distance, is 5-6 m. Here, the imaging angle and the imaging distance are given as examples of imaging conditions, but other information useful for judging the similarity between the unknown image and a learned model may also be included; for level-crossing intrusion detection, for example, the presence or absence of an alarm, the presence, absence, or shape of a crossing gate, and whether the track is single or double.
 As a preliminary step before storing a plurality of learned models in the storage unit 230, machine learning by supervised learning requires preparing a large number of appropriately labeled images as teacher data, and the learning itself takes a long time. Therefore, anticipating that new unknown images will be analyzed later, it is preferable to prepare learned models that cover a range of imaging conditions for each camera. The labels for supervised learning must likewise be chosen to match the purpose of the system, considering how detailed the provided image analysis results actually need to be: for entry detection at a level crossing, is "entry / no entry" sufficient, or is a finer classification such as "no entry / entry (adult) / entry (child) / entry (elderly person) / entry (vehicle) / entry (bicycle) / entry (animal) / entry possible" required?
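The coarse-versus-fine label choice discussed above can be sketched as two label vocabularies and a mapping between them; the identifier names are illustrative, not taken from the patent:

```python
# Two possible label vocabularies for level-crossing entry detection.
# The appropriate granularity depends on what the system must report.
COARSE_LABELS = ["no_entry", "entry"]
FINE_LABELS = [
    "no_entry",
    "entry_adult",
    "entry_child",
    "entry_elderly",
    "entry_vehicle",
    "entry_bicycle",
    "entry_animal",
    "entry_possible",
]

def coarsen(fine_label: str) -> str:
    """Map a fine-grained label down to the coarse vocabulary."""
    return "no_entry" if fine_label == "no_entry" else "entry"

print(coarsen("entry_child"))  # entry
```

Choosing the fine vocabulary up front keeps the option of coarsening later, whereas models trained only on coarse labels cannot recover the finer classes without relabeling.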
 Returning to FIG. 3, the acquisition module 211 of the computer 200 requests the camera 100 to transmit an image (step S302). If no learned model exists for the images of the camera 100 at the time of this transmission request, the image acquired from the camera 100 is an unknown image.
 In response to the image transmission request from the computer 200, the camera 100 captures an image with the imaging unit 10 (step S303).
 Then, the camera 100 transmits the captured data, which is an unknown image, to the computer 200 via the communication unit 120 (step S304).
 The acquisition module 211 of the computer 200 acquires the unknown image (step S305). Together with the unknown image, imaging conditions such as the imaging angle and the imaging position are acquired from the camera 100. The acquisition module 211 may acquire not only images that the camera 100 is capturing in real time but also images that the camera 100 captured in the past and stored in the storage unit 130.
 Next, the selection module 212 of the computer 200 selects, from among the learned models stored in step S301, a learned model whose imaging conditions are similar to those of the unknown image acquired in step S305 (step S306). For example, if the captured unknown image shows the level crossing D in FIG. 10, a model with similar imaging conditions is selected from among the learned models of crossing A, crossing B, and crossing C, for which learned models exist. Assuming that the camera D that captured crossing D has an imaging angle of 20 degrees and an imaging distance of 4-5 m, and after further analyzing the composition of the captured image, learned model B is selected here.
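The condition-based selection of step S306 could, under one simple assumption, be a nearest-neighbor search over imaging conditions. A sketch follows; the weighted L1 score and the weights are hypothetical, since the patent leaves the similarity measure open and also mentions analyzing image composition:

```python
from dataclasses import dataclass

@dataclass
class LearnedModel:
    name: str
    angle_deg: float   # imaging angle from the horizontal
    distance_m: float  # midpoint of the imaging-distance range

def select_model(models, angle_deg, distance_m, w_angle=1.0, w_dist=1.0):
    """Pick the learned model whose imaging conditions are closest to
    those of the unknown image, by weighted L1 distance over conditions."""
    def score(m):
        return (w_angle * abs(m.angle_deg - angle_deg)
                + w_dist * abs(m.distance_m - distance_m))
    return min(models, key=score)

# Hypothetical conditions for the existing crossings A-C
models = [
    LearnedModel("A", 45.0, 9.5),
    LearnedModel("B", 30.0, 5.5),
    LearnedModel("C", 60.0, 2.5),
]
# Camera D: 20 degrees, 4-5 m (midpoint 4.5 m) -> model B is nearest
print(select_model(models, 20.0, 4.5).name)  # B
```

With the values above the scores are 30 for A, 11 for B, and 42 for C, reproducing the text's choice of learned model B for camera D.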
 Next, the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D, using the learned model B (step S307).
 FIG. 13 is an example of a table showing the learned model used for image analysis for each camera when no learned model exists for the unknown images captured by the camera D. Since learned model B was selected in step S306 as the learned model whose imaging conditions are similar to those of crossing D captured by the camera D, the column for camera D in FIG. 13 is filled in with learned model B as the model used, yyyyyy as the formula, and BBB, b, and β as the parameters. The imaging angle and imaging position, which are the imaging conditions, should preferably be filled in with the actual imaging conditions of the camera D. Since supervised learning has not been performed on images captured by the camera D, the teacher data column may be left blank. Thereafter, the table of FIG. 13 can be used for image analysis of the camera D until a learned model for the camera D is created, or until the selection module 212 again selects the learned model best suited to the camera D as other learned models are added.
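A per-camera assignment table like the one FIG. 13 describes might be stored as a mapping keyed by camera. The structure below and the entry for camera B are assumptions for illustration; only the camera D row (borrowed model B, formula yyyyyy, parameters BBB, b, β, its own conditions, blank teacher data) follows the text:

```python
# Hypothetical in-memory form of the FIG. 13 assignment table.
model_table = {
    "camera_B": {"model": "B", "formula": "yyyyyy",
                 "params": ("BBB", "b", "beta"),
                 "angle_deg": 30, "distance": "5-6m",
                 "teacher_data": "labeled set B"},  # assumed entry
}

# Camera D borrows model B, but records its *own* imaging conditions;
# its teacher-data field stays empty until supervised learning is run.
model_table["camera_D"] = {"model": "B", "formula": "yyyyyy",
                           "params": ("BBB", "b", "beta"),
                           "angle_deg": 20, "distance": "4-5m",
                           "teacher_data": None}

print(model_table["camera_D"]["model"])  # B
```

Keeping the borrowed model name and the camera's actual conditions in the same row lets a later pass re-evaluate whether a better-matching model has become available.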
 Finally, the provision module 241 of the computer 200 provides the image analysis result to the input/output unit 240 of the computer 200 (step S308). The output should match the purpose of the image analysis result providing system: for entry detection at the crossing D, whether to display only "entry / no entry" or a finer classification such as "no entry / entry (adult) / entry (child) / entry (elderly person) / entry (vehicle) / entry (bicycle) / entry (animal) / entry possible", and whether to present the result not only as a display on the image but also with a warning sound or light. Although entry detection at a level crossing has been described here as an example, the image analysis is applicable to whatever suits the purpose of the system, such as face recognition for identifying individuals, determination of pest damage to crops, inventory checking in a warehouse, or recognition of affected-area images for medical diagnosis. The provision of the image analysis result need not be limited to output to the input/output unit 240 of the computer 200; output suited to the system may be performed, such as output to another device via the communication unit 220.
 As described above, according to the present invention, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has performed image analysis of known images, the learned model of a known image whose imaging conditions are similar to those of the unknown image to be newly analyzed, it is possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of outputting accurate image analysis results without spending time on learning.
 [Learned Model Creation Processing]
 FIG. 4 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationship between their functions when performing learned model creation processing for unknown images. In addition to the configuration of FIG. 2, the control unit 210 of the computer 200 cooperates with the storage unit 230 to implement the creation module 214. FIG. 5 is a flowchart of the camera 100 and the computer 200 when performing learned model creation processing for unknown images. The processing executed by each of the modules described above is explained along with this processing. Since the processing of steps S501 to S503 in FIG. 5 corresponds to the processing of steps S301 to S303 in FIG. 3, the explanation starts from step S504. As with step S301, the processing of step S501 may be skipped when a plurality of learned models are already stored and no new learned model exists.
 In response to the image transmission request from the computer 200, the camera 100 transmits the unknown images captured by the imaging unit 10 to the computer 200 via the communication unit 120 (step S504). Since learned model creation processing is to be performed on the unknown images, it is desirable to acquire as many unknown images captured by the camera 100 as possible. Therefore, not only images that the camera 100 is capturing in real time but also images that the camera 100 captured in the past and stored in the storage unit 130 may be transmitted.
 The acquisition module 211 of the computer 200 acquires a plurality of unknown images (step S505). Together with each unknown image, imaging conditions such as the imaging angle and the imaging position are acquired from the camera 100.
 Next, the creation module 214 of the computer 200 attaches teacher data to the unknown images acquired in step S505 (step S506). Here, attaching teacher data means adding to each of the acquired unknown images a label that represents the correct image analysis result. As described above, the labels for supervised learning must be chosen to match the purpose of the system, considering how detailed the provided image analysis results actually need to be: for entry detection at a level crossing, whether "entry / no entry" is sufficient or a finer classification such as "no entry / entry (adult) / entry (child) / entry (elderly person) / entry (vehicle) / entry (bicycle) / entry (animal) / entry possible" is required.
 The creation module 214 of the computer 200 performs machine learning by supervised learning, using the unknown images to which the teacher data has been attached (step S507).
 Next, the creation module 214 creates a learned model of the unknown images based on the result of the machine learning in step S507 (step S508).
 Finally, the storage module 231 stores the learned model of the unknown images in the storage unit 230 (step S509).
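Steps S506 to S509 can be outlined as a small pipeline. The function and store names below are stand-ins for the actual supervised learner and the storage unit 230, and the toy learner only illustrates the data flow, not a real training algorithm:

```python
def create_learned_model(images, labels, train_fn, store):
    """Outline of steps S506-S509: attach teacher data, run supervised
    learning, and store the resulting model. `train_fn` and `store`
    are stand-ins for the real learner and storage unit 230."""
    teacher_data = list(zip(images, labels))  # S506: attach correct labels
    model = train_fn(teacher_data)            # S507-S508: machine learning
    store[model["camera"]] = model            # S509: store the learned model
    return model

def toy_train(teacher_data):
    """Toy stand-in learner: 'learns' the majority label."""
    labels = [lbl for _, lbl in teacher_data]
    majority = max(set(labels), key=labels.count)
    return {"camera": "camera_D", "predict": lambda img: majority}

store = {}
model = create_learned_model(
    ["img1", "img2", "img3"],
    ["no_entry", "no_entry", "entry"],
    toy_train, store)
print(model["predict"]("img4"))  # no_entry
```

In a real deployment `train_fn` would be the supervised training of the patent's formula-and-parameter models, and `store` the persistent storage unit 230.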
 FIG. 14 is an example of a table showing the learned model used for image analysis for each camera when a learned model has been created for the unknown images captured by the camera D. The learned model D of the camera D created in step S508 is entered in the column for camera D in FIG. 14. The table is filled in with learned model D as the model used by the camera D, vvvvvvvv as the formula, and DDD, d, and dD as the parameters. Since supervised learning has been performed, the teacher data column now also lists the teacher data. Thereafter, the table of FIG. 14 can be used for image analysis of the camera D until the teacher data is expanded again and a new learned model is created for the camera D.
 Because this learned model creation processing requires preparing a large number of appropriately labeled images and spending time on learning in order to raise the accuracy of image analysis for new unknown images, it imposes a heavy burden when a new unknown image is accepted after operation of the image analysis result providing system has started. For this reason, when new learned model creation processing cannot be performed, the method described with reference to FIGS. 2 and 3 makes it possible to provide reasonably accurate image analysis results without spending time on learning. To obtain better accuracy, however, it is desirable to perform machine learning matched to the unknown images and to create a learned model with more suitable formulas and parameters. The learned model creation processing should therefore be carried out in a manner appropriate to the operation of the image analysis result providing system: for example, executed at a time when a sufficient number of unknown images for machine learning have accumulated and appropriate correct labels have been attached, and in a way that does not load the system or interfere with other image analysis work.
 As described above, according to the present invention, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has performed image analysis of known images, the learned model of a known image whose imaging conditions are similar to those of the unknown image to be newly analyzed, accurate image analysis results can be output without spending time on learning; furthermore, by performing machine learning matched to the new unknown images through the learned model creation processing and creating a learned model, it is possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of further improving the image analysis results.
 [Image Analysis Switching Processing]
 FIG. 6 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationship between their functions when the image analysis processing is switched depending on whether learned model creation processing for the unknown images has been completed. FIG. 7 is a flowchart of the learned model creation processing for unknown images performed by the computer 200, corresponding to process A in the flowchart of FIG. 6, when learned model creation processing has not been completed and machine learning of the unknown images is possible. FIG. 8 is a flowchart of the learned model selection processing performed by the computer 200, corresponding to process B in the flowchart of FIG. 6, when learned model creation processing has not been completed and machine learning of the unknown images is not possible. The configurations of the camera 100 and the computer 200 are the same as in FIG. 4. Since the processing of steps S601 to S605 in FIG. 6 corresponds to the processing of steps S301 to S305 in FIG. 3, the explanation starts from step S606.
 The creation module 214 of the computer 200 checks whether a learned model has already been created for the unknown images corresponding to the imaging data acquired in step S605 (step S606). If the imaging data acquired in step S605 is the first data acquired from the camera 100, it is an unknown image, and no learned model of the unknown image is considered to have been created yet.
 If no learned model of the unknown image has been created, the creation module 214 determines whether machine learning is possible using the unknown image acquired in step S605 and the unknown images stored before it (step S607). This can be judged on the basis of, for example, whether a sufficient number of images with the appropriate labels needed as teacher data for machine learning have been prepared, and whether it is acceptable to load the system and spend the time required for machine learning. The judgment may also be made in accordance with the operational status of the image analysis result providing system, and should be suited to the system.
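The feasibility check of step S607 might, as one possible reading, combine a data-volume threshold with a load threshold. Both thresholds below are hypothetical; the text leaves the concrete criteria to the operator of each deployment:

```python
def can_train_now(num_labeled_images: int, system_load: float,
                  min_images: int = 10_000, max_load: float = 0.5) -> bool:
    """Step S607 as a sketch: allow training only when enough labeled
    images have accumulated and the system is idle enough to absorb
    the training load. Thresholds are hypothetical defaults."""
    return num_labeled_images >= min_images and system_load <= max_load

print(can_train_now(12_000, 0.2))  # True  -> proceed to process A (training)
print(can_train_now(800, 0.2))     # False -> fall back to process B (selection)
```

A production system might also gate on time of day or on whether other image analysis jobs are running, as the surrounding text suggests.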
 If it is determined here that machine learning of the unknown images is possible, the processing proceeds to the flowchart of process A in FIG. 7 (step S608).
 In process A of FIG. 7, the creation module 214 of the computer 200 attaches teacher data to the unknown image acquired in step S605 and the unknown images stored before it (step S701). Here, attaching teacher data means adding to each of the acquired unknown images a label that represents the correct image analysis result. As described above, the labels for supervised learning must be chosen to match the purpose of the system, considering how detailed the provided image analysis results actually need to be: for entry detection at a level crossing, whether "entry / no entry" is sufficient or a finer classification such as "no entry / entry (adult) / entry (child) / entry (elderly person) / entry (vehicle) / entry (bicycle) / entry (animal) / entry possible" is required.
 The creation module 214 of the computer 200 performs machine learning by supervised learning, using the unknown images to which the teacher data has been attached (step S702).
 Next, the creation module 214 creates a learned model of the unknown images based on the result of the machine learning in step S702 (step S703).
 Next, the storage module 231 stores the learned model of the unknown images in the storage unit 230 (step S704).
 FIG. 14 is an example of a table showing the learned model used for image analysis for each camera when a learned model has been created for the unknown images captured by the camera D. The learned model D of the camera D created in step S703 is entered in the column for camera D in FIG. 14. The table is filled in with learned model D as the model used by the camera D, vvvvvvvv as the formula, and DDD, d, and dD as the parameters. Since supervised learning has been performed, the teacher data column now also lists the teacher data. Thereafter, the table of FIG. 14 can be used for image analysis of the camera D until the teacher data is expanded again and a new learned model is created for the camera D.
 Next, the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D, using the created learned model D (step S705).
 Next, the provision module 241 of the computer 200 provides the image analysis result to the input/output unit 240 of the computer 200 (step S706). The processing then returns to the flowchart of FIG. 6 and proceeds to step S614.
 If it is determined in step S607 that machine learning of the unknown images is not possible, the creation module 214 stores the unknown image acquired in step S605 in the storage unit 230 (step S609). This is so that the image can later be used as teacher data when machine learning for learned model creation processing is performed on the unknown images.
 After the storage processing of step S609, the processing proceeds to the flowchart of process B in FIG. 8 (step S610).
 The selection module 212 of the computer 200 selects, from among the learned models stored in step S601, a learned model whose imaging conditions are similar to those of the unknown image acquired in step S605 (step S801). For example, if the captured unknown image shows the level crossing D in FIG. 10, a model with similar imaging conditions is selected from among the learned models of crossing A, crossing B, and crossing C, for which learned models exist. Assuming that the camera D that captured crossing D has an imaging angle of 20 degrees and an imaging distance of 4-5 m, and after further analyzing the composition of the captured image, learned model B is selected here.
 Next, the image analysis module 213 performs image analysis of the unknown image captured by the camera D, using the learned model B (step S802).
 FIG. 13 is an example of a table showing the learned model used for image analysis for each camera when no learned model exists for the unknown images captured by the camera D. Since learned model B was selected in step S801 as the learned model whose imaging conditions are similar to those of crossing D captured by the camera D, the column for camera D in FIG. 13 is filled in with learned model B as the model used, yyyyyy as the formula, and BBB, b, and β as the parameters. The imaging angle and imaging position, which are the imaging conditions, should preferably be filled in with the actual imaging conditions of the camera D. Since supervised learning has not been performed on images captured by the camera D, the teacher data column may be left blank. Thereafter, the table of FIG. 13 can be used for image analysis of the camera D until a learned model for the camera D is created, or until the selection module 212 again selects the learned model best suited to the camera D as other learned models are added.
 Next, the provision module 241 provides the image analysis result to the input/output unit 240 of the computer 200 (step S803). The processing then returns to the flowchart of FIG. 6 and proceeds to step S614.
 If a learned model of the unknown images has already been created at step S606 in FIG. 6, the flow of process A is considered to have been followed previously and a learned model of the unknown images to have been created. In this case, the selection module 212 selects and applies the learned model D created in step S703 of process A (step S611).
 Next, the image analysis module 213 of the computer 200 performs image analysis of the unknown image captured by the camera D, using the learned model D (step S612).
 Next, the provision module 241 of the computer 200 provides the image analysis result to the input/output unit 240 of the computer 200 (step S613).
 In steps S706, S803, and S613, the provision module 241 provides the image analysis result to the input/output unit 240 of the computer 200. The output should match the purpose of the image analysis result providing system: for entry detection at the crossing D, whether to display only "entry / no entry" or a finer classification such as "no entry / entry (adult) / entry (child) / entry (elderly person) / entry (vehicle) / entry (bicycle) / entry (animal) / entry possible", and whether to present the result not only as a display on the image but also with a warning sound or light. Although entry detection at a level crossing has been described here as an example, the image analysis is applicable to whatever suits the purpose of the system, such as face recognition for identifying individuals, determination of pest damage to crops, inventory checking in a warehouse, or recognition of affected-area images for medical diagnosis. The provision of the image analysis result need not be limited to output to the input/output unit 240 of the computer 200; output suited to the system may be performed, such as output to another device via the communication unit 220.
 Finally, it is confirmed whether the image analysis result provision processing may be ended (step S614). If not, the processing returns to step S602 and continues; if so, the image analysis result provision processing ends.
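The overall switching flow of steps S606 to S613 can be outlined as a single dispatch function. All function arguments below are stand-ins for the modules described in the text, and the toy calls at the end only exercise the branches:

```python
def handle_image(camera_id, image, store, can_train, train, select_similar,
                 analyze):
    """Outline of steps S606-S613: use the camera's own learned model if
    it exists; otherwise train one if feasible (process A), else borrow
    the most similar existing model (process B)."""
    if camera_id in store:                 # S606: dedicated model exists?
        model = store[camera_id]           # S611: use the dedicated model
    elif can_train(camera_id):             # S607: is training feasible now?
        model = train(camera_id)           # process A (S701-S704)
        store[camera_id] = model
    else:
        model = select_similar(camera_id)  # process B (S801)
    return analyze(model, image)           # S612 / S705 / S802

# Toy stand-ins: camera D has no model and training is not feasible,
# so the similar model B is borrowed.
store = {}
result = handle_image("camera_D", "img", store,
                      can_train=lambda c: False,
                      train=lambda c: "model_D",
                      select_similar=lambda c: "model_B",
                      analyze=lambda m, i: (m, "no_entry"))
print(result)  # ('model_B', 'no_entry')
```

Once process A has run for camera D, the `camera_id in store` branch takes over and the dedicated model is used from then on.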
 As described above, according to the present invention, by selecting and using, from among a plurality of patterns of machine-learned models in which artificial intelligence has performed image analysis of known images, the learned model of a known image whose imaging conditions are similar to those of the unknown image to be newly analyzed, accurate image analysis results can be output without spending time on learning, while machine learning matched to the new unknown images can additionally be performed through the learned model creation processing to create a learned model. Moreover, by selecting and using the learned model of a known image until a new learned model is created, and by using the more accurate dedicated learned model after the new learned model has been created, it is possible to provide an image analysis result providing system, an image analysis result providing method, and a program that are accurate from the moment operation of the system starts and whose accuracy can be raised further.
 [Learned Model Creation Processing Using the Image Analysis Results of Unknown Images]
 FIG. 9 is a flowchart of the computer 200 when performing machine learning of unknown images by attaching the image analysis results of the unknown images as teacher data during learned model creation processing. The configurations of the camera 100 and the computer 200 are the same as in FIG. 4. Although FIG. 9 starts from step S901, it is assumed that processing corresponding to steps S301 to S308 in FIG. 3 has been performed beforehand, that image analysis of the unknown images has been performed with the selected learned model, and that the image analysis results have already been obtained.
 The creation module 214 of the computer 200 attaches the image analysis result of step S307 to the unknown image acquired in step S305 as teacher data (step S901). By attaching the image analysis results of the unknown images produced by the selected learned model directly as teacher data, the cost of manually attaching teacher data representing the correct answers to the large number of images required for machine learning can be reduced substantially.
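Step S901's reuse of analysis results as labels is, in modern terms, a form of pseudo-labeling. A minimal sketch follows; the predicate used by the borrowed model is a toy stand-in, and in practice mislabeled outputs will propagate into the new model, so some review of the generated labels may still be warranted:

```python
def pseudo_label(images, borrowed_model_predict):
    """Step S901 as a sketch: use the borrowed model's own analysis
    results as teacher data for the camera's dedicated model,
    avoiding manual labeling."""
    return [(img, borrowed_model_predict(img)) for img in images]

# Toy stand-in for the borrowed learned model B
predict_B = lambda img: "entry" if "person" in img else "no_entry"

teacher_data = pseudo_label(["empty_crossing", "person_at_crossing"],
                            predict_B)
print(teacher_data)
# [('empty_crossing', 'no_entry'), ('person_at_crossing', 'entry')]
```

The resulting `teacher_data` pairs feed directly into the supervised learning of step S902 in place of manually labeled images.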
 作成モジュール214は、教師データを付与した未知画像を使用して、教師あり学習による機械学習を行う(ステップS902)。 The creation module 214 performs machine learning by supervised learning, using the unknown images to which the teacher data has been assigned (step S902).
 次に、作成モジュール214は、ステップS902の機械学習の結果を基に、未知画像の学習済みモデルを作成する(ステップS903)。 Next, the creation module 214 creates a learned model of the unknown image based on the result of the machine learning in step S902 (step S903).
 最後に、記憶モジュール231により、未知画像の学習済みモデルを記憶部230に記憶する(ステップS904)。 Finally, the storage module 231 stores the learned model of the unknown image in the storage unit 230 (step S904).
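Steps S902 to S904 can be sketched as training on the pseudo-labelled samples and then persisting the result. The sketch below uses a deliberately trivial model (a per-label mean of one feature, nearest-centroid style) and `pickle` for storage; the feature, labels, file name, and model form are all illustrative assumptions, not the method of the disclosure.

```python
import pickle
from collections import defaultdict

def train(samples):
    """Steps S902-S903 sketch: supervised learning over
    (feature_value, teacher_label) pairs, yielding a per-label mean."""
    sums, counts = defaultdict(float), defaultdict(int)
    for feature, label in samples:
        sums[label] += feature
        counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

def store_model(model, path):
    """Step S904 sketch: store the learned model of the unknown image."""
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Pseudo-labelled samples from the previous step (illustrative values).
samples = [(1.0, "person"), (0.5, "person"), (0.25, "background")]
model = train(samples)                         # steps S902-S903
store_model(model, "unknown_image_model.pkl")  # step S904
print(load_model("unknown_image_model.pkl")["person"])  # -> 0.75
```

Reloading the stored model corresponds to the system later preferring this dedicated model over the initially selected known-image model.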
 このように、本発明によれば、人工知能が既知画像を画像解析した機械学習済みの学習済みモデルの複数パターンの中から、新たに画像解析させたい未知画像と撮像条件が似ている既知画像の学習済みモデルを選択して利用することで、学習時間をかけずに、精度の良い画像解析結果を出力しつつ、更に、選択した学習済みモデルによる未知画像の画像解析結果を、そのまま教師データとして付与することで、機械学習に必要な大量の画像に手動で正解データとなる教師データを付加するというコストを、大幅に削減することが可能となり、最終的に、画像解析結果を向上させることが可能な画像解析結果提供システム、画像解析結果提供方法、およびプログラムを提供することが可能となる。 As described above, according to the present invention, by selecting and using, from among a plurality of patterns of machine-learned learned models in which artificial intelligence has image-analyzed known images, the learned model of a known image whose imaging conditions are similar to those of the unknown image to be newly analyzed, an accurate image analysis result can be output without spending learning time. Furthermore, by assigning the image analysis result of the unknown image produced by the selected learned model directly as teacher data, the cost of manually attaching teacher data serving as correct-answer data to the large number of images required for machine learning can be greatly reduced, and ultimately it becomes possible to provide an image analysis result providing system, an image analysis result providing method, and a program capable of improving the image analysis result.
 上述した手段、機能は、コンピュータ(CPU、情報処理装置、各種端末を含む)が、所定のプログラムを読み込んで、実行することによって実現される。プログラムは、例えば、コンピュータからネットワーク経由で提供される(SaaS:ソフトウェア・アズ・ア・サービス)形態であってもよいし、フレキシブルディスク、CD(CD-ROM等)、DVD(DVD-ROM、DVD-RAM等)、コンパクトメモリ等のコンピュータ読取可能な記録媒体に記録された形態で提供される。この場合、コンピュータはその記録媒体からプログラムを読み取って内部記憶装置又は外部記憶装置に転送し記憶して実行する。また、そのプログラムを、例えば、磁気ディスク、光ディスク、光磁気ディスク等の記憶装置(記録媒体)に予め記録しておき、その記憶装置から通信回線を介してコンピュータに提供するようにしてもよい。 The above-described means and functions are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program. The program may be provided, for example, in a form delivered from a computer via a network (SaaS: Software as a Service), or in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM or the like), a DVD (DVD-ROM, DVD-RAM, or the like), or a compact memory. In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it. Alternatively, the program may be recorded in advance on a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.
 以上、本発明の実施形態について説明したが、本発明は上述したこれらの実施形態に限るものではない。また、本発明の実施形態に記載された効果は、本発明から生じる最も好適な効果を列挙したに過ぎず、本発明による効果は、本発明の実施形態に記載されたものに限定されるものではない。 Although the embodiments of the present invention have been described above, the present invention is not limited to these embodiments. Further, the effects described in the embodiments of the present invention merely list the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
100 カメラ、200 コンピュータ、300 通信網、400 被写体 100 camera, 200 computer, 300 communication network, 400 subject

Claims (8)

  1.  既知画像を画像解析した機械学習済みの学習済みモデルを記憶する記憶手段と、
     未だ学習済みモデルが作成されていない未知画像を取得する取得手段と、
     前記記憶した学習済みモデルの中から、前記取得した未知画像と撮像条件が似ている前記既知画像の学習済みモデルを選択する選択手段と、
     前記選択した学習済みモデルを利用して、前記未知画像を画像解析させる画像解析手段と、
     前記画像解析の結果を提供する提供手段と、
    を備えることを特徴とする画像解析結果提供システム。
    Storage means for storing a machine-learned learned model obtained by image analysis of a known image;
    Acquisition means for acquiring an unknown image for which a learned model has not yet been created;
    Selecting means for selecting, from among the stored learned models, a learned model of the known image whose imaging conditions are similar to those of the acquired unknown image;
    Image analysis means for performing image analysis of the unknown image using the selected learned model;
    Providing means for providing the result of the image analysis;
    An image analysis result providing system comprising:
  2.  前記撮像条件が、撮像対象に対する撮像位置および撮像角度、であることを特徴とする請求項1に記載の画像解析結果提供システム。 The image analysis result providing system according to claim 1, wherein the imaging conditions are an imaging position and an imaging angle with respect to an imaging target.
  3.  前記学習済みモデルが、前記機械学習により算出した前記画像解析に使用する数式およびパラメータと前記機械学習に使用した画像であることを特徴とする請求項1又は請求項2に記載の画像解析結果提供システム。 The image analysis result providing system according to claim 1 or 2, wherein the learned model comprises the formulas and parameters used for the image analysis, calculated by the machine learning, and the images used for the machine learning.
  4.  前記未知画像に対して、画像解析した新しい機械学習済みの学習済みモデルを作成する作成手段を備えることを特徴とする請求項1から請求項3のいずれか一項に記載の画像解析結果提供システム。 The image analysis result providing system according to any one of claims 1 to 3, further comprising creation means for creating, for the unknown image, a new machine-learned learned model by image analysis.
  5.  前記画像解析手段は、前記新しい機械学習済みの学習済みモデルを作成するまでの期間に、前記選択された学習済みモデルを利用して、前記未知画像を画像解析させることを特徴とする請求項4に記載の画像解析結果提供システム。 The image analysis result providing system according to claim 4, wherein the image analysis means performs image analysis of the unknown image using the selected learned model during the period until the new machine-learned learned model is created.
  6.  前記未知画像に対して、前記選択した学習済みモデルを利用して、前記未知画像を画像解析した解析結果を教師データとして、前記未知画像を画像解析した新しい機械学習済みの学習済みモデルを作成する作成手段を備えることを特徴とする請求項1から請求項4のいずれか一項に記載の画像解析結果提供システム。 The image analysis result providing system according to any one of claims 1 to 4, further comprising creation means for creating a new machine-learned learned model by image analysis of the unknown image, using as teacher data the analysis result obtained by image-analyzing the unknown image with the selected learned model.
  7.  既知画像を画像解析した機械学習済みの学習済みモデルを記憶するステップと、
     未だ学習済みモデルが作成されていない未知画像を取得するステップと、
     前記記憶した学習済みモデルの中から、前記取得した未知画像と撮像条件が似ている前記既知画像の学習済みモデルを選択するステップと、
     前記選択した学習済みモデルを利用して、前記未知画像を画像解析させるステップと、
     前記画像解析の結果を提供するステップと、
    を備える画像解析結果提供方法。
    Storing a machine-learned learned model obtained by image analysis of a known image;
    Acquiring an unknown image for which a learned model has not yet been created;
    Selecting a learned model of the known image whose imaging condition is similar to the acquired unknown image from the stored learned models;
    Analyzing the unknown image using the selected learned model;
    Providing the result of the image analysis;
    An image analysis result providing method comprising:
  8.  画像解析結果提供システムに、
     既知画像を画像解析した機械学習済みの学習済みモデルを記憶するステップ、
     未だ学習済みモデルが作成されていない未知画像を取得するステップ、
     前記記憶した学習済みモデルの中から、前記取得した未知画像と撮像条件が似ている前記既知画像の学習済みモデルを選択するステップ、
     前記選択した学習済みモデルを利用して、前記未知画像を画像解析させるステップ、
     前記画像解析の結果を提供するステップ、
    を実行させるためのプログラム。
    In an image analysis result providing system:
    Storing a machine-learned learned model obtained by image analysis of known images;
    Acquiring an unknown image for which a learned model has not yet been created;
    Selecting a learned model of the known image whose imaging condition is similar to the acquired unknown image from the stored learned models;
    Analyzing the unknown image using the selected learned model;
    Providing the result of the image analysis;
    A program for causing the system to execute the above steps.
PCT/JP2017/023807 2017-06-28 2017-06-28 System for providing image analysis result, method for providing image analysis result, and program WO2019003355A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2017/023807 WO2019003355A1 (en) 2017-06-28 2017-06-28 System for providing image analysis result, method for providing image analysis result, and program
JP2018545247A JP6474946B1 (en) 2017-06-28 2017-06-28 Image analysis result providing system, image analysis result providing method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/023807 WO2019003355A1 (en) 2017-06-28 2017-06-28 System for providing image analysis result, method for providing image analysis result, and program

Publications (1)

Publication Number Publication Date
WO2019003355A1 true WO2019003355A1 (en) 2019-01-03

Family

ID=64741244

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/023807 WO2019003355A1 (en) 2017-06-28 2017-06-28 System for providing image analysis result, method for providing image analysis result, and program

Country Status (2)

Country Link
JP (1) JP6474946B1 (en)
WO (1) WO2019003355A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021081793A (en) 2019-11-14 2021-05-27 キヤノン株式会社 Information processing device, control method and program for information processing device
JP2022158647A (en) * 2021-04-02 2022-10-17 日立造船株式会社 Information processing device, determination method and determination program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011060221A (en) * 2009-09-14 2011-03-24 Sumitomo Electric Ind Ltd Discriminator generation method, computer program, discriminator generating device and predetermined object detecting device
JP2012068965A (en) * 2010-09-24 2012-04-05 Denso Corp Image recognition device
JP2016015045A (en) * 2014-07-02 2016-01-28 キヤノン株式会社 Image recognition device, image recognition method, and program


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2022070491A1 (en) * 2020-09-29 2022-04-07
WO2022070491A1 (en) * 2020-09-29 2022-04-07 株式会社島津製作所 Image analyzing device
CN112203059A (en) * 2020-10-15 2021-01-08 石家庄粮保科技有限公司 Insect detection method based on AI image recognition technology
WO2022113535A1 (en) 2020-11-27 2022-06-02 株式会社Jvcケンウッド Image recognition device, image recognition method, and object recognition model
WO2023135621A1 (en) * 2022-01-11 2023-07-20 三菱電機株式会社 Surveillance camera image analysis system
JP7360115B1 (en) 2022-04-13 2023-10-12 株式会社Ridge-i Information processing device, information processing method, and information processing program
JP2023156898A (en) * 2022-04-13 2023-10-25 株式会社Ridge-i Information processing device, information processing method, and information processing program

Also Published As

Publication number Publication date
JP6474946B1 (en) 2019-02-27
JPWO2019003355A1 (en) 2019-06-27

Similar Documents

Publication Publication Date Title
WO2019003355A1 (en) System for providing image analysis result, method for providing image analysis result, and program
EP3992976A1 (en) Compound property prediction method and apparatus, and computer device and readable storage medium
US20170187988A1 (en) System and method for image processing
US10860841B2 (en) Facial expression image processing method and apparatus
US11327320B2 (en) Electronic device and method of controlling the same
US20150125842A1 (en) Multimedia apparatus, online education system, and method for providing education content thereof
WO2018100676A1 (en) Camera control system, camera control method, and program
CN104252712A (en) Image generating apparatus and image generating method
US20170185365A1 (en) System and method for screen sharing
CN109271929B (en) Detection method and device
Larrue et al. Influence of body-centered information on the transfer of spatial learning from a virtual to a real environment
Zabulis et al. Multicamera human detection and tracking supporting natural interaction with large-scale displays
KR102338984B1 (en) System for providing 3D model augmented reality service using AI and method thereof
JP6525043B2 (en) DATA GENERATION DEVICE, DATA GENERATION METHOD, AND PROGRAM
Ho et al. IoTouch: whole-body tactile sensing technology toward the tele-touch
EP3769188A1 (en) Representation of user position, movement, and gaze in mixed reality space
JP6246441B1 (en) Image analysis system, image analysis method, and program
US10057321B2 (en) Image management apparatus and control method capable of automatically creating comment data relevant to an image
KR101575100B1 (en) User group activity sensing in service area and behavior semantic analysis system
KR20210044116A (en) Electronic Device and the Method for Classifying Development Condition of Infant thereof
Meilinger et al. Verbal shadowing and visual interference in spatial memory
US20200105421A1 (en) Systems and methods for topology-based clinical data mining
JP2017518592A (en) Method and system for performing an evaluation
CN111274417A (en) Emotion labeling method and device, electronic equipment and computer readable storage medium
JP2021047369A (en) Information processing device and virtual customer service system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018545247

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17916290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17916290

Country of ref document: EP

Kind code of ref document: A1