WO2019229789A1 - Trained model suggestion system, trained model suggestion method, and program - Google Patents

Trained model suggestion system, trained model suggestion method, and program

Info

Publication number
WO2019229789A1
Authority
WO
WIPO (PCT)
Prior art keywords
learned model
environment
image
learned
image analysis
Prior art date
Application number
PCT/JP2018/020288
Other languages
French (fr)
Japanese (ja)
Inventor
俊二 菅谷
Original Assignee
株式会社オプティム
Priority date
Filing date
Publication date
Application filed by 株式会社オプティム filed Critical 株式会社オプティム
Priority to JP2020521646A priority Critical patent/JP7068745B2/en
Priority to PCT/JP2018/020288 priority patent/WO2019229789A1/en
Publication of WO2019229789A1 publication Critical patent/WO2019229789A1/en

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • The present invention relates to a learned model proposal system, a learned model proposal method, and a program that can propose an existing learned model similar to the task at hand by acquiring and using the purpose of image analysis and the environment in which the image is captured.
  • A method has been proposed that provides a mechanism for determining who appears in a person image by performing image analysis processing on the image and automatically categorizing it (Patent Document 1).
  • As a machine learning technique for artificial intelligence to perform image analysis, supervised learning is a well-known approach, and a method for generating a learned model suited to a given purpose has also been proposed (Patent Document 2).
  • The present inventor focused on the possibility that, if the purpose of image analysis and the environment in which the image is captured are compatible, an existing learned model can be used to obtain accurate image analysis results without spending time on learning.
  • It is an object of the present invention to provide a learned model proposal system, a learned model proposal method, and a program that propose an existing learned model similar to the task at hand by acquiring and using the purpose of image analysis and the environment in which the image is captured, so that accurate image analysis results can be obtained without spending time on learning.
  • The present invention provides the following solutions.
  • The invention according to the first feature provides a learned model proposal system that proposes a learned model suitable for image analysis, comprising: a learned model database that stores learned models for image analysis in association with a purpose and an environment; purpose acquisition means for acquiring the purpose of image analysis; environment acquisition means for acquiring the environment in which an image for that purpose is captured; and learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • According to the invention of the first feature, a system that proposes a learned model suitable for image analysis comprises a learned model database that stores learned models for image analysis in association with a purpose and an environment, purpose acquisition means for acquiring the purpose of image analysis, environment acquisition means for acquiring the environment in which an image for that purpose is captured, and learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • The invention according to the first feature falls in the category of a learned model proposal system, but a learned model proposal method and a program provide the same actions and effects.
  • The invention according to the second feature is the learned model proposal system according to the first feature, wherein the learned model includes a classifier trained with predetermined learning data consisting of past images and correct answer data, and the learned model proposal means proposes the trained classifier as the learned model.
  • According to the invention of the second feature, in the learned model proposal system according to the first feature, the learned model includes a classifier trained with predetermined learning data consisting of past images and correct answer data, and the learned model proposal means proposes the trained classifier as the learned model.
  • The invention according to the third feature is the learned model proposal system according to the second feature, wherein the learned model consists of the type of classifier used when images are classified by a classifier and a conversion method for converting an image into a feature vector.
  • According to the invention of the third feature, in the learned model proposal system according to the second feature, the learned model consists of the type of classifier used when images are classified by a classifier and a conversion method for converting an image into a feature vector.
  • The invention according to the fourth feature is the learned model proposal system according to the first feature, wherein the learned model is a convolutional neural network trained with predetermined learning data consisting of past images and correct answer data.
  • According to the invention of the fourth feature, in the learned model proposal system according to the first feature, the learned model is a convolutional neural network trained with predetermined learning data consisting of past images and correct answer data.
  • The invention according to the fifth feature is the learned model proposal system according to any one of the second to fourth features, further comprising image acquisition means for acquiring an image of the environment in which the image analysis is to be performed, and image comparison means for determining whether the acquired image and the images of the predetermined learning data are similar, wherein the learned model proposal means proposes the learned model when the images are similar.
  • According to the invention of the fifth feature, the learned model proposal system according to any one of the second to fourth features comprises image acquisition means for acquiring an image of the environment in which the image analysis is to be performed and image comparison means for determining whether the acquired image and the images of the predetermined learning data are similar, and the learned model proposal means proposes the learned model when the images are similar.
  • The invention according to the sixth feature is the learned model proposal system according to any one of the first to fifth features, wherein the environment acquisition means acquires an answer entered in response to a presented question as data relating to the environment.
  • According to the invention of the sixth feature, in the learned model proposal system according to any one of the first to fifth features, the environment acquisition means acquires an answer entered in response to a presented question as data relating to the environment.
  • The invention according to the seventh feature is the learned model proposal system according to any one of the first to fifth features, wherein the environment acquisition means acquires data detected by a sensor or a camera.
  • According to the invention of the seventh feature, in the learned model proposal system according to any one of the first to fifth features, the environment acquisition means acquires data detected by a sensor or a camera.
  • The invention according to the eighth feature provides a learned model proposal method for a system that proposes a learned model suitable for image analysis and that comprises a learned model database storing learned models for image analysis in association with a purpose and an environment, the method comprising the steps of acquiring the purpose of image analysis, acquiring the environment in which an image for that purpose is captured, and referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • The invention according to the ninth feature provides a program for causing a learned model proposal system, which comprises a learned model database that stores learned models for image analysis in association with a purpose and an environment, to execute the steps of acquiring the purpose of image analysis, acquiring the environment in which an image for that purpose is captured, and referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • According to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, an existing learned model similar to the task at hand is proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain accurate image analysis results without spending time on learning.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions.
  • FIG. 3 is a flowchart of the learned model proposal process.
  • FIG. 4 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions when performing image comparison.
  • FIG. 5 is a flowchart of learned model proposal processing when image comparison is performed.
  • FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • FIG. 8 is an example of a learned model proposal screen.
  • FIG. 9 is an example of the configuration of the learned model database.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. The outline of the present invention will be described with reference to FIG. 1.
  • the learned model proposal system includes a camera 100, a computer 200, and a communication network 300.
  • the number of cameras 100 is not limited to one and may be plural.
  • the computer 200 is not limited to a real device, and may be a virtual device.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • The computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240, as shown in FIG. 2.
  • the storage unit 230 includes a learned model database 23.
  • the control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input / output unit 240.
  • the input / output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging apparatus capable of data communication with the computer 200, includes imaging components such as an image sensor and a lens, and captures the images to be analyzed.
  • Here, a web camera is illustrated as an example, but the camera may be any imaging apparatus having the necessary functions, such as a digital camera, a digital video camera, a camera mounted on a drone, a wearable device camera, a security camera, an in-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • the camera 100 may be a stereo camera, in which case the distance to the subject group can be measured.
  • the camera 100 may be provided with a light intensity sensor, and in that case, the ambient light intensity can be measured.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • Here, a desktop computer is illustrated as an example, but the computer may be a mobile phone, a portable information terminal, a tablet terminal, or a personal computer, as well as an appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
  • First, a plurality of learned models are stored in the learned model database 23 of the computer 200 (step S01).
  • The learned models may be acquired from another computer or a storage medium, or may be created by the computer 200. Step S01 can be omitted when sufficient learned models are already stored in the learned model database 23.
  • FIG. 9 shows an example of the configuration of the learned model database.
  • Here, a learned model refers to predetermined learning data (teacher data) composed of past images and correct answer data together with a machine learning method trained on that data, such as a trained classifier or a trained convolutional neural network (CNN).
  • When a conversion method for converting an image into a feature vector is used, the conversion method is also included in the learned model together with the machine learning method.
  • In the learned model database 23, the purpose of image analysis and the environment in which the image was captured are stored in association with each learned model.
  • Examples of the purpose include detecting the entry of a suspicious person (suspicious person detection), detecting the appropriate harvest time of crops (crop detection), and detecting the occurrence of pests (pest detection).
  • Examples of the environment include conditions such as location, size, camera position, and lighting: the location may be indoors, outdoors (city), or outdoors (farm); the area may be given in square meters or hectares; the camera position may be in a corner of the ceiling, in the center of the ceiling, or on a desk or shelf; and the lighting may be selected from fluorescent lamp, LED, natural light, none, and so on. A minimal sketch of such a database record and the matching step is shown below.
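  • To make the association between learned models, purposes, and environments concrete, the following is a minimal Python sketch of how database records like those in FIG. 9 might be represented and matched. The field names, the sample "Bunruki002" entry, the example download URL, and the exact-match rule are illustrative assumptions, not details specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class LearnedModelRecord:
    """One assumed row of the learned model database."""
    model_id: str        # e.g. "Bunruki002"
    purpose: str         # e.g. "suspicious person detection"
    environment: dict    # e.g. {"location": "indoor", "area_m2": 20, ...}
    download_url: str = ""

# Toy stand-in for the learned model database 23.
LEARNED_MODEL_DB = [
    LearnedModelRecord(
        model_id="Bunruki002",
        purpose="suspicious person detection",
        environment={"location": "indoor", "area_m2": 20,
                     "camera_position": "ceiling center", "lighting": "LED"},
        download_url="https://example.com/models/Bunruki002",  # hypothetical URL
    ),
]

def propose_exact_match(purpose: str, environment: dict) -> list:
    """Return learned models whose purpose and every given environment item match."""
    return [
        rec for rec in LEARNED_MODEL_DB
        if rec.purpose == purpose
        and all(rec.environment.get(key) == value for key, value in environment.items())
    ]
```

  • Under these assumptions, calling propose_exact_match("suspicious person detection", {"location": "indoor", "area_m2": 20, "camera_position": "ceiling center", "lighting": "LED"}) would return the "Bunruki002" record in this toy database.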
  • Next, the purpose acquisition module 211 of the computer 200 acquires the purpose for which image analysis is to be performed (step S02).
  • The purpose may be transmitted from the camera 100, entered by the user via the input/output unit 240 of the computer 200, or entered by the user via another terminal (not shown).
  • the environment acquisition module 212 of the computer 200 acquires an environment for capturing an image to be subjected to image analysis (step S03).
  • The environment may be transmitted from the camera 100 as illustrated in FIG. 1, entered by the user via the input/output unit 240 of the computer 200, or entered by the user via another terminal (not shown).
  • FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • Questions about the purpose of the image analysis and the shooting environment of the image are displayed to the user, who selects or enters answers to specify the purpose and environment.
  • The example of FIG. 6 presents suspicious person detection, crop detection, and pest detection as possible purposes of image analysis, with suspicious person detection selected.
  • For the shooting environment, indoor, outdoor (city), and outdoor (farm) are presented for the location, indoor is selected, and 20 square meters is entered for the area; for the camera position, ceiling corner, center of the ceiling, and on a desk or shelf are presented, and center of the ceiling is selected; for the lighting, fluorescent lamp, LED, and natural light are presented, and LED is selected.
  • When the search button 601 is selected, the answers to the questions are confirmed, and the purpose acquisition module 211 and the environment acquisition module 212 complete their acquisition.
  • Here, the questions about the purpose of image analysis and the shooting environment of the image are presented on a single screen, but separate screens may be used.
  • FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • The example of FIG. 7 presents suspicious person detection, crop detection, and pest detection as possible purposes of image analysis, with crop detection selected.
  • For the shooting environment, indoor, outdoor (city), and outdoor (farm) are presented for the location, and outdoor (farm) is selected.
  • 5 ha is entered for the area.
  • For the camera position, outside of the building, utility pole, and drone are presented, and utility pole is selected; for the lighting, "none" and "yes" are presented, and "none" is selected.
  • Because the purpose of image analysis differs from that in the example of FIG. 6, FIG. 7 shows an example in which the options presented for the image capturing environment are changed accordingly. By changing the shooting environment options according to already-selected items such as the purpose of image analysis and the location, the user can enter answers suited to the purpose and location more easily, for example as sketched below.
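  • As one way to realize such purpose-dependent questions, the following minimal sketch keeps a mapping from the selected purpose to the environment options to present. The concrete option lists are assumptions drawn from the examples of FIG. 6 and FIG. 7, not an implementation described by the patent.

```python
# Assumed mapping from the selected purpose of image analysis to the
# environment questions and options that should be presented to the user.
ENVIRONMENT_QUESTIONS = {
    "suspicious person detection": {
        "location": ["indoor", "outdoor (city)", "outdoor (farm)"],
        "camera position": ["ceiling corner", "ceiling center", "on desk/shelf"],
        "lighting": ["fluorescent lamp", "LED", "natural light"],
    },
    "crop detection": {
        "location": ["indoor", "outdoor (city)", "outdoor (farm)"],
        "camera position": ["outside of building", "utility pole", "drone"],
        "lighting": ["none", "yes"],
    },
}

def questions_for(purpose: str) -> dict:
    """Return the environment questions and options to present for the selected purpose."""
    return ENVIRONMENT_QUESTIONS.get(purpose, {})
```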
  • Next, the learned model proposal module 241 of the computer 200 refers to the learned model database 23, checks which learned model's purpose and environment match the purpose acquired in step S02 and the environment acquired in step S03, and proposes an appropriate learned model (step S04).
  • the proposal of the learned model here may be output to the input / output unit 240 of the computer 200 or may be output to the input / output unit of another terminal (not shown) used by the user.
  • FIG. 8 is an example of a learned model proposal screen.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • When the learned model database 23 of FIG. 9 is searched with the acquired purpose and environment, the learned model "Bunruki002" matches; therefore, an example is shown here in which "Bunruki002" is proposed as the learned model.
  • As shown by the link 801 in FIG. 8, a download URL may be displayed so that the user can select it and immediately download the proposed learned model.
  • The learned model proposal system may be terminated by selecting the end button 802.
  • Alternatively, the screen may return to the question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment shown in FIG. 6 and FIG. 7.
  • When no suitable learned model is found, a learned model whose purpose matches and whose environment is close may be output as a proposal or for reference, for example by ranking candidates as sketched below.
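  • When no learned model matches both the purpose and the environment exactly, one possible heuristic for picking a model whose purpose matches and whose environment is close is to score environments by the number of matching items, reusing the assumed LearnedModelRecord structure from the earlier sketch. This is an illustrative assumption, not the patent's prescribed method.

```python
def propose_closest_environment(purpose: str, environment: dict, db: list):
    """Among models with the matching purpose, return the record whose environment
    shares the most attribute values with the requested environment (a heuristic)."""
    candidates = [rec for rec in db if rec.purpose == purpose]
    if not candidates:
        return None

    def score(rec):
        # Count how many requested environment items the stored record matches.
        return sum(1 for key, value in environment.items()
                   if rec.environment.get(key) == value)

    return max(candidates, key=score)
```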
  • As described above, according to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, an existing learned model similar to the task at hand is proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain accurate image analysis results without spending time on learning.
  • FIG. 2 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • the computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240.
  • the control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input / output unit 240. Further, the input / output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging apparatus capable of data communication with the computer 200, includes imaging components such as an image sensor and a lens, and captures the images to be analyzed.
  • Here, a web camera is illustrated as an example, but the camera may be any imaging apparatus having the necessary functions, such as a digital camera, a digital video camera, a camera mounted on a drone, a wearable device camera, a security camera, an in-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • the camera 100 may be a stereo camera, in which case the distance to the subject group can be measured.
  • the camera 100 may be provided with a light intensity sensor, and in that case, the ambient light intensity can be measured.
  • The camera 100 includes, as the imaging unit 10, imaging components such as a lens and an image sensor, various buttons, and a flash, and captures moving images and still images.
  • The captured image is a precise image that has the amount of information necessary for image analysis.
  • the control unit 110 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
  • As the communication unit 120, the camera 100 includes a device for communicating with other devices, for example a WiFi (Wireless Fidelity) device compliant with IEEE 802.11 or a wireless device compliant with an IMT-2000 standard such as a third- or fourth-generation mobile communication system; a wired LAN connection may also be used.
  • the storage unit 130 includes a data storage unit such as a hard disk or a semiconductor memory, and stores necessary data such as captured images. The purpose of image analysis and the shooting environment of the image may be stored together.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • Here, a desktop computer is illustrated as an example, but the computer may be a mobile phone, a portable information terminal, a tablet terminal, or a personal computer, as well as an appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
  • the control unit 210 includes a CPU, RAM, ROM, and the like.
  • The control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input/output unit 240.
  • As the communication unit 220, the computer 200 includes a device for communicating with other devices, for example a WiFi device compliant with IEEE 802.11 or a wireless device compliant with an IMT-2000 standard such as a third- or fourth-generation mobile communication system; a wired LAN connection may also be used.
  • the storage unit 230 includes a data storage unit using a hard disk or a semiconductor memory, and stores data necessary for processing such as captured images, teacher data, and image analysis results. Further, the storage unit 230 includes a learned model database 23.
  • the input / output unit 240 has functions necessary to use the learned model proposal system.
  • the input / output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230.
  • As input forms, a liquid crystal display realizing a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, a microphone for voice recognition, and the like can be provided.
  • As output forms, a liquid crystal display, a PC display, projection by a projector, audio output, and the like can be considered.
  • the function of the present invention is not particularly limited by the input / output method.
  • FIG. 3 is a flowchart of the learned model proposal process. Processing executed by each module described above will be described in accordance with this processing.
  • First, a plurality of learned models are stored in the learned model database 23 of the computer 200 (step S301).
  • The learned models may be acquired from another computer or a storage medium, or may be created by the computer 200. Step S301 can be omitted when sufficient learned models are already stored in the learned model database 23.
  • FIG. 9 shows an example of the configuration of the learned model database.
  • Here, a learned model refers to predetermined learning data (teacher data) composed of past images and correct answer data together with a machine learning method trained on that data, such as a trained classifier or a trained convolutional neural network.
  • When a conversion method for converting an image into a feature vector is used, the conversion method is also included in the learned model together with the machine learning method.
  • In the learned model database 23, the purpose of image analysis and the environment in which the image was captured are stored in association with each learned model.
  • Examples of the purpose include detecting the entry of a suspicious person (suspicious person detection), detecting the appropriate harvest time of crops (crop detection), and detecting the occurrence of pests (pest detection).
  • Examples of the environment include conditions such as location, size, camera position, and lighting: the location may be indoors, outdoors (city), or outdoors (farm); the area may be given in square meters or hectares; the camera position may be in a corner of the ceiling, in the center of the ceiling, or on a desk or shelf; and the lighting may be selected from fluorescent lamp, LED, natural light, none, and so on.
  • machine learning is performed using predetermined learning data (teacher data) composed of past images and correct answer data.
  • the machine learning method used here is preferably suitable for image analysis.
  • Machine learning techniques include neural networks such as convolutional neural networks (CNN), perceptrons, recurrent neural networks (RNN), and residual networks (ResNet), as well as support vector machines (SVM) and naive Bayes classifiers.
  • Examples of the conversion method for converting an image into a feature vector include Bag of Visual Words, HOG (Histogram of Oriented Gradients), ORB, SURF, and the like; a brief feature-extraction sketch follows.
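  • As an illustration of converting an image into a feature vector before classification, the sketch below computes HOG features with scikit-image and ORB descriptors with OpenCV. The parameter values are ordinary defaults chosen for the example and are not specified by the patent.

```python
import cv2                       # OpenCV, used here for ORB features
from skimage.feature import hog  # scikit-image, used here for HOG features

def hog_feature_vector(gray_image):
    """Convert a grayscale image into a HOG feature vector."""
    return hog(gray_image, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def orb_descriptors(gray_image, n_features=500):
    """Detect ORB keypoints and return their binary descriptors."""
    orb = cv2.ORB_create(nfeatures=n_features)
    _keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return descriptors

# Example usage with a hypothetical image file:
# image = cv2.imread("sample.jpg")
# gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# vector = hog_feature_vector(gray)
```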
  • The learned model stored in the learned model database 23 thus consists of the predetermined learning data composed of past images and correct answer data together with the machine learning method trained on that data, such as a trained classifier or a trained convolutional neural network.
  • When a conversion method for converting an image into a feature vector is used, the conversion method is also included in the learned model together with the machine learning method.
  • A plurality of machine learning methods may be tried on the same learning data, and only the method with the best image analysis results may be stored in the learned model database 23, for example as sketched below.
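  • The idea of trying several machine learning methods on the same teacher data and keeping only the best could be realized, for instance, with scikit-learn cross-validation as below. The candidate models, their parameters, and the scoring are illustrative choices, not the patent's prescribed procedure.

```python
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def best_method(feature_vectors, labels):
    """Cross-validate several candidate classifiers on the same teacher data
    and return the name and fitted instance of the best-scoring one."""
    candidates = {
        "svm": SVC(kernel="rbf"),
        "naive_bayes": GaussianNB(),
        "mlp": MLPClassifier(max_iter=500),
    }
    scores = {name: cross_val_score(model, feature_vectors, labels, cv=5).mean()
              for name, model in candidates.items()}
    best_name = max(scores, key=scores.get)
    return best_name, candidates[best_name].fit(feature_vectors, labels)
```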
  • When a learned model is created by the computer 200, learning takes time; it is therefore desirable to perform the learning when sufficient time and CPU resources can be devoted to it, for example before the learned model proposal system goes into operation.
  • Next, the purpose acquisition module 211 of the computer 200 transmits a purpose transmission request in order to acquire the purpose for which image analysis is to be performed (step S302).
  • FIG. 3 illustrates a case where the purpose is acquired by transmitting the purpose from the camera 100 as a method for acquiring the purpose.
  • The destination of the purpose transmission request may be another terminal or the like (not shown). Alternatively, instead of transmitting the purpose transmission request, a question may be presented on the input/output unit 240 of the computer 200.
  • the camera 100 receives the purpose transmission request from the computer 200 and transmits the purpose through the communication unit 120 (step S303).
  • When the destination of the purpose transmission request is another terminal or the like (not shown), that terminal transmits the purpose to the computer 200; in this case, the moment the user confirms the answer to the question corresponds to the purpose transmission.
  • the purpose acquisition module 211 of the computer 200 acquires the purpose (step S304).
  • The acquisition source may be the camera 100, another terminal (not shown), or the input/output unit 240 of the computer 200, in accordance with the destination of the purpose transmission request in step S302.
  • the environment acquisition module 212 of the computer 200 transmits an environment transmission request in order to acquire an environment for capturing an image to be subjected to image analysis (step S305).
  • FIG. 3 illustrates a case where an environment is transmitted from the camera 100 and acquired as an environment acquisition method.
  • The destination of the environment transmission request may be another terminal or the like (not shown). Alternatively, instead of transmitting the environment transmission request, a question may be presented on the input/output unit 240 of the computer 200.
  • the camera 100 receives the environment transmission request from the computer 200 and transmits the environment via the communication unit 120 (step S306).
  • When the camera 100 is a stereo camera, the size or the like may be determined by analyzing the distance to the subject and transmitted as part of the environment.
  • When the camera 100 includes a special sensor such as a light intensity sensor, the location, lighting, and the like may be determined from the sensor values and transmitted as part of the environment.
  • When the destination of the environment transmission request is another terminal or the like (not shown), that terminal transmits the environment to the computer 200; in this case, the moment the user confirms the answer to the question corresponds to the environment transmission.
  • the environment acquisition module 212 of the computer 200 acquires the environment (step S307).
  • the acquisition destination may be the camera 100, another terminal (not shown), or the input / output unit 240 of the computer 200, in accordance with the environment transmission request destination in step S305.
  • The environment acquisition module 212 may acquire the location, size, camera position, lighting, and the like directly, or may derive them from data detected by a sensor or the camera: it may be determined whether the camera is indoors or outdoors, and when the camera 100 is a stereo camera, the distance to the subject and the area may be determined using the camera itself, as sketched below.
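  • As a hedged illustration of deriving environment items from sensor or camera data, the sketch below estimates the distance to a subject from stereo disparity with the standard pinhole model and makes a rough lighting guess from a light intensity reading. The lux thresholds and the category names are assumptions for illustration only.

```python
def distance_from_stereo(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Estimate subject distance in meters from stereo disparity (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

def guess_lighting(lux: float) -> str:
    """Very rough lighting guess from a light intensity sensor reading."""
    if lux < 10:
        return "none"
    if lux < 1000:
        return "artificial (fluorescent or LED)"
    return "natural light"
```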
  • FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • Questions about the purpose of the image analysis and the shooting environment of the image are displayed to the user, who selects or enters answers to specify the purpose and environment.
  • The example of FIG. 6 presents suspicious person detection, crop detection, and pest detection as possible purposes of image analysis, with suspicious person detection selected.
  • For the shooting environment, indoor, outdoor (city), and outdoor (farm) are presented for the location, indoor is selected, and 20 square meters is entered for the area; for the camera position, ceiling corner, center of the ceiling, and on a desk or shelf are presented, and center of the ceiling is selected; for the lighting, fluorescent lamp, LED, and natural light are presented, and LED is selected.
  • When the search button 601 is selected, the answers to the questions are confirmed, and the purpose acquisition module 211 and the environment acquisition module 212 complete their acquisition.
  • Here, the questions about the purpose of image analysis and the shooting environment of the image are presented on a single screen, but separate screens may be used.
  • FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • The example of FIG. 7 presents suspicious person detection, crop detection, and pest detection as possible purposes of image analysis, with crop detection selected.
  • For the shooting environment, indoor, outdoor (city), and outdoor (farm) are presented for the location, and outdoor (farm) is selected.
  • For the camera position, outside of the building, utility pole, and drone are presented, and utility pole is selected; for the lighting, "none" and "yes" are presented, and "none" is selected.
  • Because the purpose of image analysis differs from that in the example of FIG. 6, FIG. 7 shows an example in which the options presented for the image capturing environment are changed accordingly.
  • By changing the shooting environment options according to already-selected items such as the purpose of image analysis and the location, the user can enter answers suited to the purpose and location more easily.
  • Next, the learned model proposal module 241 of the computer 200 refers to the learned model database 23, searches for a learned model whose purpose and environment match the purpose acquired in step S304 and the environment acquired in step S307, and proposes an appropriate learned model (step S308).
  • the proposal of the learned model here may be output to the input / output unit 240 of the computer 200 or may be output to the input / output unit of another terminal (not shown) used by the user.
  • FIG. 8 is an example of a learned model proposal screen.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • When the learned model database 23 of FIG. 9 is searched with the acquired purpose and environment, "Bunruki002" matches and is proposed as a suitable learned model.
  • A download URL may be displayed so that the proposed learned model can be downloaded immediately by selecting it.
  • The learned model proposal system may be terminated by selecting the end button 802. Alternatively, by selecting the button 801, the screen may return to the question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment shown in FIG. 6 and FIG. 7. When no suitable learned model is found, a learned model whose purpose matches and whose environment is close may be output as a proposal or for reference. By using the proposed learned model, the user can obtain accurate image analysis results suited to the purpose without spending time on learning.
  • As described above, according to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, an existing learned model similar to the task at hand is proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain accurate image analysis results without spending time on learning.
  • FIG. 4 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions when performing image comparison.
  • the control unit 210 of the computer 200 implements the image acquisition module 213 in cooperation with the communication unit 220 and the storage unit 230. Further, the control unit 210 implements the image comparison module 214 in cooperation with the storage unit 230.
  • FIG. 5 is a flowchart of the learned model proposal process when image comparison is performed. Processing executed by each module described above will be described along this flow. Since the processing from step S501 to step S507 in FIG. 5 corresponds to the processing from step S301 to step S307 in FIG. 3, only step S508 and subsequent steps will be described. Note that, as with step S301, the processing in step S501 can be omitted when sufficient learned models are already stored in the learned model database 23.
  • the image acquisition module 213 of the computer 200 transmits an image transmission request to the camera 100 in order to acquire an image to be subjected to image analysis (step S508).
  • the camera 100 receives an image transmission request from the computer 200 and transmits an image via the communication unit 120 (step S509).
  • the camera 100 may not only transmit an image captured in real time but also transmit an image captured by the camera 100 in the past and stored in the storage unit 130.
  • the image acquisition module 213 of the computer 200 acquires an image to be analyzed from the camera 100 (step S510).
  • The environment acquisition module 212 may analyze the image acquired in step S510 to determine the location, area, size, camera position, lighting, and so on.
  • Next, the image comparison module 214 of the computer 200 compares the image to be analyzed acquired in step S510 with the images of the predetermined learning data (teacher data) of the learned models stored in the learned model database 23 (step S511).
  • For this comparison, it is not necessary to use all the images of the predetermined learning data of a learned model; one or several images may be picked out and used.
  • When the image to be analyzed is similar to the images of the predetermined learning data of a learned model, the accuracy of image analysis using that learned model is expected to be higher, so that learned model is proposed.
  • This comparison may be performed to confirm that the image is similar to the images of a learned model that matches the purpose and the environment, to narrow down which learned model to propose when multiple learned models match the purpose and the environment, or to select a learned model to propose from among those that match only the purpose when no learned model matches both the purpose and the environment. When no suitable learned model is found, the system may report that there is none, or may propose the learned model that matches the purpose and has the closest environment. A brief similarity-check sketch follows.
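  • One simple way to decide whether the acquired image and a learning data image are "similar" is to compare normalized color histograms with OpenCV, as in the sketch below; the correlation threshold is an assumed value, and the patent does not prescribe this particular criterion.

```python
import cv2

def color_histogram(image):
    """Compute a normalized 3D color histogram as a coarse image signature."""
    hist = cv2.calcHist([image], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def images_similar(image_a, image_b, threshold=0.8):
    """Judge similarity by histogram correlation; the threshold is an assumption."""
    score = cv2.compareHist(color_histogram(image_a),
                            color_histogram(image_b),
                            cv2.HISTCMP_CORREL)
    return score >= threshold
```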
  • the learned model proposal module 241 of the computer 200 proposes an appropriate learned model based on the image comparison result in step S511 (step S512).
  • the proposal of the learned model here may be output to the input / output unit 240 of the computer 200 or may be output to the input / output unit of another terminal (not shown) used by the user.
  • FIG. 8 shows an example of the learned model proposal screen as described above.
  • In this way, by also acquiring and using the image to be analyzed, an existing learned model similar to it can be proposed.
  • the means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program.
  • The program may be provided, for example, in a form (SaaS: Software as a Service) provided from a computer via a network, or may be provided recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), a DVD (DVD-ROM, DVD-RAM, etc.), or a compact memory.
  • the computer reads the program from the recording medium, transfers it to the internal storage device or the external storage device, stores it, and executes it.
  • the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to a computer via a communication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

[Problem] To provide a trained model suggestion system, a trained model suggestion method, and a program which suggest a trained model capable of outputting accurate image analysis results of an image desired to be newly subjected to image analysis, without spending time on learning. [Solution] This trained model suggestion system comprises: a trained model database 23 which associates and stores trained models for image analysis with purposes and environments; a purpose ascertaining module 211 which ascertains the purpose for which an image is to be analyzed; an environment ascertaining module 212 which ascertains the environment in which the image is captured; and a trained model suggestion module 241 which refers to the trained model database 23 and suggests a trained model matching the purpose and the environment.

Description

Learned model proposal system, learned model proposal method, and program
 The present invention relates to a learned model proposal system, a learned model proposal method, and a program that can propose an existing learned model similar to the task at hand by acquiring and using the purpose of image analysis and the environment in which the image is captured.
 A method has been proposed that provides a mechanism for determining who appears in a person image by performing image analysis processing on the image and automatically categorizing it (Patent Document 1).
 As a machine learning technique for artificial intelligence to perform image analysis, supervised learning is a well-known approach, and a method for generating a learned model suited to a given purpose has also been proposed (Patent Document 2).
JP 2015-69580 A; Japanese Patent No. 6216024
 However, when supervised learning is performed for image analysis, it is generally necessary to prepare a large number of images, from tens of thousands to several million or more, attach correct teacher data to them, and then train a classifier, a neural network, or the like to create a learned model. To create an optimal learned model for each of a plurality of cameras, preparing the images for learning is laborious, and the learning itself requires a long time.
 In view of this problem, the present inventor focused on the possibility that, if the purpose of image analysis and the environment in which the image is captured are compatible, an existing learned model can be used to obtain accurate image analysis results without spending time on learning.
 It is an object of the present invention to provide a learned model proposal system, a learned model proposal method, and a program that propose an existing learned model similar to the task at hand by acquiring and using the purpose of image analysis and the environment in which the image is captured, so that accurate image analysis results can be obtained without spending time on learning.
 The present invention provides the following solutions.
 The invention according to the first feature provides a learned model proposal system that proposes a learned model suitable for image analysis, comprising:
 a learned model database that stores learned models for image analysis in association with a purpose and an environment;
 purpose acquisition means for acquiring the purpose of image analysis;
 environment acquisition means for acquiring the environment in which an image for that purpose is captured; and
 learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
 According to the invention of the first feature, a system that proposes a learned model suitable for image analysis comprises a learned model database that stores learned models for image analysis in association with a purpose and an environment, purpose acquisition means for acquiring the purpose of image analysis, environment acquisition means for acquiring the environment in which an image for that purpose is captured, and learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
 The invention according to the first feature falls in the category of a learned model proposal system, but a learned model proposal method and a program provide the same actions and effects.
 The invention according to the second feature is the learned model proposal system according to the first feature, wherein
 the learned model includes a classifier trained with predetermined learning data consisting of past images and correct answer data, and
 the learned model proposal means proposes the trained classifier as the learned model.
 According to the invention of the second feature, in the learned model proposal system according to the first feature, the learned model includes a classifier trained with predetermined learning data consisting of past images and correct answer data, and the learned model proposal means proposes the trained classifier as the learned model.
 The invention according to the third feature is the learned model proposal system according to the second feature, wherein
 the learned model consists of the type of classifier used when images are classified by a classifier and a conversion method for converting an image into a feature vector.
 According to the invention of the third feature, in the learned model proposal system according to the second feature, the learned model consists of the type of classifier used when images are classified by a classifier and a conversion method for converting an image into a feature vector.
 The invention according to the fourth feature is the learned model proposal system according to the first feature, wherein
 the learned model is a convolutional neural network trained with predetermined learning data consisting of past images and correct answer data.
 According to the invention of the fourth feature, in the learned model proposal system according to the first feature, the learned model is a convolutional neural network trained with predetermined learning data consisting of past images and correct answer data.
 The invention according to the fifth feature is the learned model proposal system according to any one of the second to fourth features, further comprising:
 image acquisition means for acquiring an image of the environment in which the image analysis is to be performed; and
 image comparison means for determining whether the acquired image and the images of the predetermined learning data are similar,
 wherein the learned model proposal means proposes the learned model when the images are similar.
 According to the invention of the fifth feature, the learned model proposal system according to any one of the second to fourth features comprises image acquisition means for acquiring an image of the environment in which the image analysis is to be performed and image comparison means for determining whether the acquired image and the images of the predetermined learning data are similar, and the learned model proposal means proposes the learned model when the images are similar.
 The invention according to the sixth feature is the learned model proposal system according to any one of the first to fifth features, wherein
 the environment acquisition means acquires an answer entered in response to a presented question as data relating to the environment.
 According to the invention of the sixth feature, in the learned model proposal system according to any one of the first to fifth features, the environment acquisition means acquires an answer entered in response to a presented question as data relating to the environment.
 The invention according to the seventh feature is the learned model proposal system according to any one of the first to fifth features, wherein
 the environment acquisition means acquires data detected by a sensor or a camera.
 According to the invention of the seventh feature, in the learned model proposal system according to any one of the first to fifth features, the environment acquisition means acquires data detected by a sensor or a camera.
 The invention according to the eighth feature provides a learned model proposal method for a system that proposes a learned model suitable for image analysis and that comprises a learned model database storing learned models for image analysis in association with a purpose and an environment, the method comprising the steps of:
 acquiring the purpose of image analysis;
 acquiring the environment in which an image for that purpose is captured; and
 referring to the learned model database and proposing a learned model suited to the purpose and the environment.
 The invention according to the ninth feature provides a program for causing a learned model proposal system, which comprises a learned model database that stores learned models for image analysis in association with a purpose and an environment, to execute the steps of:
 acquiring the purpose of image analysis;
 acquiring the environment in which an image for that purpose is captured; and
 referring to the learned model database and proposing a learned model suited to the purpose and the environment.
 According to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, an existing learned model similar to the task at hand is proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain accurate image analysis results without spending time on learning.
FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. FIG. 2 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationships between the functions. FIG. 3 is a flowchart of the learned model proposal process. FIG. 4 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationships between the functions when image comparison is performed. FIG. 5 is a flowchart of the learned model proposal process when image comparison is performed. FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment. FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment. FIG. 8 is an example of a learned model proposal screen. FIG. 9 is an example of the configuration of the learned model database.
 Hereinafter, the best mode for carrying out the present invention will be described with reference to the drawings. This is merely an example, and the technical scope of the present invention is not limited to it.
 [Outline of the learned model proposal system]
 FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. The outline of the present invention will be described with reference to FIG. 1. The learned model proposal system includes a camera 100, a computer 200, and a communication network 300.
 In FIG. 1, the number of cameras 100 is not limited to one and may be plural. The computer 200 is not limited to a physically existing device and may be a virtual device.
 As shown in FIG. 2, the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130. As also shown in FIG. 2, the computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240. The storage unit 230 includes the learned model database 23. The control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input/output unit 240. The input/output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230. The communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
 The camera 100 is an imaging device capable of data communication with the computer 200 and equipped with imaging components such as an image sensor and a lens, and captures the images to be analyzed. A WEB camera is shown here as an example, but the camera 100 may be any imaging device having the necessary functions, such as a digital camera, a digital video camera, a drone-mounted camera, a wearable device camera, a security camera, an in-vehicle camera, or a 360-degree camera. Captured images may also be stored in the storage unit 130. The camera 100 may be a stereo camera, in which case the distance to the subject group can be measured. The camera 100 may also be provided with a light intensity sensor, in which case the ambient light intensity can be measured.
 The computer 200 is a computing device capable of data communication with the camera 100. A desktop computer is shown here as an example, but it may be a mobile phone, a portable information terminal, a tablet terminal, or a personal computer, as well as an electronic appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
 In the learned model proposal system of FIG. 1, first, a plurality of learned models are stored in the learned model database 23 of the computer 200 (step S01). The learned models may be acquired from another computer or a storage medium, or may be created by the computer 200. Step S01 can be omitted when sufficient learned models are already stored in the learned model database 23.
 FIG. 9 is an example of the configuration of the learned model database. In the present invention, a learned model includes predetermined learning data (teacher data) consisting of past images and correct answer data, and a machine learning method such as a learned classifier or a learned convolutional neural network (CNN) trained with that learning data. When a conversion method for converting images into feature vectors exists, that conversion method is also included in the learned model together with the machine learning method. In the learned model database 23, the purpose of image analysis and the environment in which the images were captured are stored in association with each learned model. Examples of the purpose of image analysis include detecting the intrusion of a suspicious person (suspicious person detection), appropriately detecting the harvest time of crops (crop detection), and detecting the occurrence of pests (pest detection). As for the environment in which images are captured, conditions such as the location, the area, the camera position, and the lighting can be considered. For example, the location may be indoor, outdoor (city), or outdoor (farm); the area may be a number of square meters or hectares; the camera position may be a ceiling corner, the center of the ceiling, on a desk or shelf, outside a building, on a utility pole, or on a drone; and the lighting may be fluorescent, LED, natural light, absent, or present. By using the purpose of image analysis and the information on the environment in which the images were captured, it becomes possible to propose a learned model whose purpose and environment are the same.
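 Purely as an illustrative sketch, and not as part of the patent disclosure, the purpose-and-environment associations held in the learned model database 23 could be represented in memory as shown below. The field names, sample values, and URLs are assumptions modeled on the examples mentioned above; FIG. 9 itself is not reproduced here.

    from dataclasses import dataclass

    @dataclass
    class LearnedModelRecord:
        # One entry of the learned model database 23 (field names are assumptions).
        model_id: str          # e.g. "Bunruiki002"
        purpose: str           # "suspicious person detection", "crop detection", ...
        place: str             # "indoor", "outdoor (city)", "outdoor (farm)"
        area_sqm: float        # floor or field area, normalised to square meters
        camera_position: str   # "ceiling corner", "ceiling center", "utility pole", ...
        lighting: str          # "fluorescent", "LED", "natural light", "none"
        download_url: str      # where the trained classifier can be downloaded

    # A hypothetical database with entries shaped like the examples in the text.
    LEARNED_MODEL_DB = [
        LearnedModelRecord("Bunruiki002", "suspicious person detection",
                           "indoor", 20.0, "ceiling center", "LED",
                           "https://example.com/models/Bunruiki002"),
        LearnedModelRecord("Bunruiki007", "crop detection",
                           "outdoor (farm)", 50000.0, "utility pole", "none",
                           "https://example.com/models/Bunruiki007"),
    ]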
 Returning to FIG. 1, the purpose acquisition module 211 of the computer 200 acquires the purpose for which image analysis is to be performed (step S02). The purpose may be acquired by having the camera 100 transmit it, as shown in FIG. 1, by having the user enter it via the input/output unit 240 of the computer 200, or by having the user enter it via another terminal or the like (not shown).
 Next, the environment acquisition module 212 of the computer 200 acquires the environment in which the image to be analyzed is captured (step S03). The environment may be acquired by having the camera 100 transmit it, as shown in FIG. 1, by having the user enter it via the input/output unit 240 of the computer 200, or by having the user enter it via another terminal or the like (not shown).
 FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment. This screen may be displayed on the input/output unit 240 of the computer 200 or on the input/output unit of another terminal (not shown) used by the user. Questions about the purpose of image analysis and the image capturing environment are displayed to the user, and the purpose and environment are specified by having the user select or enter answers. The example of FIG. 6 shows a case where suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and suspicious person detection is selected. As the image capturing environment, for the location, indoor, outdoor (city), and outdoor (farm) are presented and indoor is selected; for the area, 20 square meters is entered; for the camera position, ceiling corner, ceiling center, and desk/shelf are presented and ceiling center is selected; and for the lighting, fluorescent, LED, and natural light are presented and LED is selected. By selecting the search button 601, the input to the questions is confirmed, and the purpose acquisition module 211 and the environment acquisition module 212 complete their acquisition. Although this example presents the questions about the purpose of image analysis and the image capturing environment on a single screen, separate screens may be used.
 FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment. The example of FIG. 7 shows a case where suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and crop detection is selected. As the image capturing environment, for the location, indoor, outdoor (city), and outdoor (farm) are presented and outdoor (farm) is selected; for the area, 5 ha is entered; for the camera position, outside building, utility pole, and drone are presented and utility pole is selected; and for the lighting, none and present are presented and none is selected. Because the purpose of image analysis differs from the example of FIG. 6, FIG. 7 shows an example in which the choices presented for the image capturing environment are changed. By changing the choices for the capturing environment in accordance with items that have already been selected, such as the purpose of image analysis and the location, the user can more easily make inputs that suit the purpose and the location.
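 The behavior just described, changing the environment choices according to the selected purpose, could be sketched as follows. The option sets below are assumed only for illustration and are not the actual choices prescribed by the system; the function name options_for is likewise hypothetical.

    # Hypothetical mapping from the selected purpose to the environment options
    # presented by the question screen (cf. the differences between FIG. 6 and FIG. 7).
    ENVIRONMENT_OPTIONS = {
        "suspicious person detection": {
            "place": ["indoor", "outdoor (city)"],
            "camera_position": ["ceiling corner", "ceiling center", "desk/shelf"],
            "lighting": ["fluorescent", "LED", "natural light"],
        },
        "crop detection": {
            "place": ["outdoor (farm)"],
            "camera_position": ["outside building", "utility pole", "drone"],
            "lighting": ["none", "present"],
        },
    }

    def options_for(purpose: str) -> dict:
        """Return the environment choices to display for the chosen purpose."""
        # Fall back to a generic set when the purpose has no tailored options.
        default = {
            "place": ["indoor", "outdoor (city)", "outdoor (farm)"],
            "camera_position": ["ceiling corner", "ceiling center", "desk/shelf"],
            "lighting": ["fluorescent", "LED", "natural light", "none"],
        }
        return ENVIRONMENT_OPTIONS.get(purpose, default)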
 Returning to FIG. 1, finally, the learned model proposal module 241 of the computer 200 refers to the learned model database 23, checks which learned models have a purpose and an environment matching the purpose acquired in step S02 and the environment acquired in step S03, and proposes an appropriate learned model (step S04). The proposal of the learned model here may be output to the input/output unit 240 of the computer 200 or to the input/output unit of another terminal (not shown) used by the user.
 FIG. 8 is an example of a learned model proposal screen. This screen may be displayed on the input/output unit 240 of the computer 200 or on the input/output unit of another terminal (not shown) used by the user. As the search result of the example of FIG. 6, when a search is performed under the conditions that the purpose is suspicious person detection and the environment is indoor, approximately 20 square meters, camera position at the center of the ceiling, and LED lighting, searching the learned model database 23 of FIG. 9 finds that the learned model "Bunruiki002" matches. Accordingly, this shows an example in which "Bunruiki002" is proposed as the learned model. On the proposal screen, as shown by the link 801 in FIG. 8, a download URL may be displayed so that the proposed learned model can be downloaded immediately when the user selects it. The learned model proposal system may be terminated by selecting the end button 802. By selecting the "to search screen" button 803, the screen may return to the question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment shown in FIG. 6 or FIG. 7. If no matching learned model is found, a learned model whose purpose matches and whose environment is close may be output as a proposal or as a reference. By using these learned models, the user can obtain accurate image analysis results suited to the purpose without spending time on learning.
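 A minimal sketch of step S04, assuming the hypothetical record structure introduced earlier, is shown below: exact matches on purpose and environment are preferred, and when none exists, models with the same purpose and the closest environment are returned as a reference, as described above. The scoring weights and thresholds are arbitrary assumptions, not values given in the patent.

    def propose_models(db, purpose, place, area_sqm, camera_position, lighting,
                       max_results=3):
        """Return exact matches first; otherwise same-purpose models ranked by
        how closely their environment resembles the query (cf. step S04)."""
        same_purpose = [r for r in db if r.purpose == purpose]

        def environment_score(rec):
            score = 0.0
            score += 1 if rec.place == place else 0
            score += 1 if rec.camera_position == camera_position else 0
            score += 1 if rec.lighting == lighting else 0
            # Reward similar area on a relative scale.
            if rec.area_sqm and area_sqm:
                ratio = min(rec.area_sqm, area_sqm) / max(rec.area_sqm, area_sqm)
                score += ratio
            return score

        # Treat "all categorical fields equal and area within a factor of two" as exact.
        exact = [r for r in same_purpose if environment_score(r) >= 3.5]
        if exact:
            return exact[:max_results]
        # No exact match: propose the closest environments as a reference.
        return sorted(same_purpose, key=environment_score, reverse=True)[:max_results]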
 According to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, an existing learned model similar to them can be proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program capable of obtaining accurate image analysis results without spending time on learning.
 [Description of each function]
 FIG. 2 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationships among their functions. The camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130. The computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240. The control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input/output unit 240. The input/output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230. The communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
 The camera 100 is an imaging device capable of data communication with the computer 200 and equipped with imaging components such as an image sensor and a lens, and captures the images to be analyzed. A WEB camera is shown here as an example, but the camera 100 may be any imaging device having the necessary functions, such as a digital camera, a digital video camera, a drone-mounted camera, a wearable device camera, a security camera, an in-vehicle camera, or a 360-degree camera. Captured images may also be stored in the storage unit 130. The camera 100 may be a stereo camera, in which case the distance to the subject group can be measured. The camera 100 may also be provided with a light intensity sensor, in which case the ambient light intensity can be measured.
 The camera 100 includes, as the imaging unit 10, imaging components such as a lens, an image sensor, various buttons, and a flash, and captures images such as moving images and still images. The images obtained by imaging are assumed to be precise images having the amount of information necessary for image analysis.
 The control unit 110 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
 As the communication unit 120, the camera 100 includes a device for enabling communication with other devices, for example, a WiFi (Wireless Fidelity) device compliant with IEEE 802.11 or a wireless device compliant with the IMT-2000 standard such as a third- or fourth-generation mobile communication system. A wired LAN connection may also be used.
 As the storage unit 130, the camera 100 includes a data storage unit such as a hard disk or a semiconductor memory, and stores necessary data such as captured images. The purpose of image analysis, the image capturing environment, and the like may also be stored.
 The computer 200 is a computing device capable of data communication with the camera 100. A desktop computer is shown here as an example, but it may be a mobile phone, a portable information terminal, a tablet terminal, or a personal computer, as well as an electronic appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
 The control unit 210 includes a CPU, a RAM, a ROM, and the like. The control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input/output unit 240.
 As the communication unit 220, the computer 200 includes a device for enabling communication with other devices, for example, a WiFi device compliant with IEEE 802.11 or a wireless device compliant with the IMT-2000 standard such as a third- or fourth-generation mobile communication system. A wired LAN connection may also be used. Communication with another terminal (not shown) used by the user is performed via the communication unit 220 as necessary.
 As the storage unit 230, the computer 200 includes a data storage unit such as a hard disk or a semiconductor memory, and stores data necessary for processing, such as captured images, teacher data, and image analysis results. The storage unit 230 also includes the learned model database 23.
 The input/output unit 240 has the functions necessary for using the learned model proposal system. The input/output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230. Examples of devices for realizing input include a liquid crystal display with a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, and a microphone for voice recognition. Examples of forms for realizing output include display on a liquid crystal display, a PC display, or projection by a projector, as well as audio output. The functions of the present invention are not particularly limited by the input/output method.
 [Learned model proposal process]
 FIG. 3 is a flowchart of the learned model proposal process. The processing executed by each of the modules described above will be explained in accordance with this process.
 First, a plurality of learned models are stored in the learned model database 23 of the computer 200 (step S301). The learned models may be acquired from another computer or a storage medium, or may be created by the computer 200. Step S301 can be omitted when sufficient learned models are already stored in the learned model database 23.
 FIG. 9 is an example of the configuration of the learned model database. In the present invention, a learned model includes predetermined learning data (teacher data) consisting of past images and correct answer data, and a machine learning method such as a learned classifier or a learned convolutional neural network trained with that learning data. When a conversion method for converting images into feature vectors exists, that conversion method is also included in the learned model together with the machine learning method. In the learned model database 23, the purpose of image analysis and the environment in which the images were captured are stored in association with each learned model. Examples of the purpose of image analysis include detecting the intrusion of a suspicious person (suspicious person detection), appropriately detecting the harvest time of crops (crop detection), and detecting the occurrence of pests (pest detection). As for the environment in which images are captured, conditions such as the location, the area, the camera position, and the lighting can be considered. For example, the location may be indoor, outdoor (city), or outdoor (farm); the area may be a number of square meters or hectares; the camera position may be a ceiling corner, the center of the ceiling, on a desk or shelf, outside a building, on a utility pole, or on a drone; and the lighting may be fluorescent, LED, natural light, absent, or present. By using the purpose of image analysis and the information on the environment in which the images were captured, it becomes possible to propose a learned model whose purpose and environment are the same.
 When a learned model is created by the computer 200, machine learning is performed using predetermined learning data (teacher data) consisting of past images and correct answer data. The machine learning method used here is preferably one suited to image analysis. Examples of machine learning methods include neural networks such as a convolutional neural network (CNN), a perceptron, a recurrent neural network (RNN), and a residual network (ResNet), as well as a support vector machine (SVM) and a naive Bayes classifier. Examples of conversion methods for converting images into feature vectors include Bag of Visual Words, HOG (Histogram of Oriented Gradients), ORB, and SURF. When a learned model is stored in the learned model database 23, it includes the predetermined learning data consisting of past images and correct answer data and a machine learning method such as a learned classifier or a learned convolutional neural network trained with that learning data. When a conversion method for converting images into feature vectors exists, that conversion method is also included in the learned model together with the machine learning method. A plurality of machine learning methods may be tried on certain learning data, and only the method that gives the best image analysis results may be stored in the learned model database 23. Since creating a learned model on the computer 200 requires time for learning, it is desirable to do so when sufficient time and CPU resources can be devoted to learning, for example before the learned model proposal system goes into operation.
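 As one hedged illustration of trying several machine learning methods on the same teacher data and keeping only the best-performing one, a scikit-learn style sketch is shown below. The choice of libraries, the flattened-pixel feature conversion, and the cross-validation scoring are assumptions for illustration, not methods prescribed by the patent.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier

    def extract_features(images):
        # Placeholder feature conversion: flatten pixels. In practice this could be
        # HOG, ORB, SURF, or a Bag of Visual Words representation, as noted above.
        return np.asarray([img.ravel() for img in images], dtype=np.float32)

    def train_best_model(images, labels):
        """Try several classifiers on the same teacher data and keep the best one."""
        X, y = extract_features(images), np.asarray(labels)
        candidates = {
            "svm": SVC(kernel="rbf", gamma="scale"),
            "naive_bayes": GaussianNB(),
            "mlp": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
        }
        scores = {name: cross_val_score(clf, X, y, cv=3).mean()
                  for name, clf in candidates.items()}
        best_name = max(scores, key=scores.get)
        best_clf = candidates[best_name].fit(X, y)
        # Only the best-scoring method would be stored in the learned model database.
        return best_name, best_clf, scores[best_name]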
 Returning to FIG. 3, the purpose acquisition module 211 of the computer 200 transmits a purpose transmission request in order to acquire the purpose for which image analysis is to be performed (step S302). FIG. 3 illustrates the case where the purpose is acquired by having the camera 100 transmit it. The destination of the purpose transmission request may instead be another terminal or the like (not shown). Instead of transmitting the purpose transmission request, a question may be presented on the input/output unit 240 of the computer 200.
 The camera 100 receives the purpose transmission request from the computer 200 and transmits the purpose via the communication unit 120 (step S303). When the destination of the purpose transmission request is another terminal or the like (not shown), that terminal transmits the purpose to the computer 200. When a question is presented on the input/output unit 240 of the computer 200 instead of transmitting a purpose transmission request, the timing at which the user confirms the input to the question corresponds to the transmission of the purpose.
 The purpose acquisition module 211 of the computer 200 acquires the purpose (step S304). The acquisition source may be the camera 100, another terminal or the like (not shown), or the input/output unit 240 of the computer 200, in accordance with the destination of the purpose transmission request in step S302.
 Next, the environment acquisition module 212 of the computer 200 transmits an environment transmission request in order to acquire the environment in which the image to be analyzed is captured (step S305). FIG. 3 illustrates the case where the environment is acquired by having the camera 100 transmit it. The destination of the environment transmission request may instead be another terminal or the like (not shown). Instead of transmitting the environment transmission request, a question may be presented on the input/output unit 240 of the computer 200.
 The camera 100 receives the environment transmission request from the computer 200 and transmits the environment via the communication unit 120 (step S306). Here, when the camera 100 is a stereo camera, the area and the like may be determined by analyzing the distance to the subject and transmitted as the environment. When the camera 100 is provided with a special sensor such as a light intensity sensor, the location, the lighting, and the like may be determined from the sensor values and transmitted as the environment. When the destination of the environment transmission request is another terminal or the like (not shown), that terminal transmits the environment to the computer 200. When a question is presented on the input/output unit 240 of the computer 200 instead of transmitting an environment transmission request, the timing at which the user confirms the input to the question corresponds to the transmission of the environment.
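 A rough sketch of how such sensor readings might be turned into environment values before transmission is given below, under assumed thresholds; the lux cut-off, the square-area approximation from stereo distances, and the category labels are illustrative guesses, not values given in the patent.

    def infer_environment(lux_reading, stereo_distances_m=None):
        """Derive coarse environment attributes from sensor data (assumed thresholds)."""
        env = {}
        # Very bright scenes are assumed to be daylight outdoors; dimmer ones indoors.
        if lux_reading is not None:
            env["place"] = "outdoor" if lux_reading > 10000 else "indoor"
            env["lighting"] = "natural light" if lux_reading > 10000 else "artificial"
        # With a stereo camera, distances to the subject group hint at the area covered.
        if stereo_distances_m:
            max_d = max(stereo_distances_m)
            env["approx_area_sqm"] = round(max_d * max_d, 1)  # crude square approximation
        return env

    # Example: a 900 lx indoor reading with subjects up to about 4.5 m away.
    print(infer_environment(900, [2.0, 3.1, 4.5]))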
 Next, the environment acquisition module 212 of the computer 200 acquires the environment (step S307). The acquisition source may be the camera 100, another terminal or the like (not shown), or the input/output unit 240 of the computer 200, in accordance with the destination of the environment transmission request in step S305.
 The environment acquired here by the environment acquisition module 212 may be the location, area, camera position, lighting, and the like acquired directly. Alternatively, for example, when the camera is provided with a light intensity sensor, the sensor may be used to determine whether the location is indoors or outdoors, and when the camera is a stereo camera, it may be used to determine the distance to the subject and the area.
 FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment. This screen may be displayed on the input/output unit 240 of the computer 200 or on the input/output unit of another terminal (not shown) used by the user. Questions about the purpose of image analysis and the image capturing environment are displayed to the user, and the purpose and environment are specified by having the user select or enter answers. The example of FIG. 6 shows a case where suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and suspicious person detection is selected. As the image capturing environment, for the location, indoor, outdoor (city), and outdoor (farm) are presented and indoor is selected; for the area, 20 square meters is entered; for the camera position, ceiling corner, ceiling center, and desk/shelf are presented and ceiling center is selected; and for the lighting, fluorescent, LED, and natural light are presented and LED is selected. By selecting the search button 601, the input to the questions is confirmed, and the purpose acquisition module 211 and the environment acquisition module 212 complete their acquisition. Although this example presents the questions about the purpose of image analysis and the image capturing environment on a single screen, separate screens may be used.
 FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment. The example of FIG. 7 shows a case where suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and crop detection is selected. As the image capturing environment, for the location, indoor, outdoor (city), and outdoor (farm) are presented and outdoor (farm) is selected; for the area, 5 ha is entered; for the camera position, outside building, utility pole, and drone are presented and utility pole is selected; and for the lighting, none and present are presented and none is selected. Because the purpose of image analysis differs from the example of FIG. 6, FIG. 7 shows an example in which the choices presented for the image capturing environment are changed. By changing the choices for the capturing environment in accordance with items that have already been selected, such as the purpose of image analysis and the location, inputs that suit the purpose and the location can be made more easily.
 Returning to FIG. 3, the learned model proposal module 241 of the computer 200 finally refers to the learned model database 23, searches for learned models whose purpose and environment match the purpose acquired in step S304 and the environment acquired in step S307, and proposes an appropriate learned model (step S308). The proposal of the learned model here may be output to the input/output unit 240 of the computer 200 or to the input/output unit of another terminal (not shown) used by the user.
 FIG. 8 is an example of a learned model proposal screen. This screen may be displayed on the input/output unit 240 of the computer 200 or on the input/output unit of another terminal (not shown) used by the user. As the search result of the example of FIG. 6, when a search is performed under the conditions that the purpose is suspicious person detection and the environment is indoor, approximately 20 square meters, camera position at the center of the ceiling, and LED lighting, the learned model database 23 of FIG. 9 is searched and "Bunruiki002" is proposed as the matching learned model. On the proposal screen, as shown by the link 801 in FIG. 8, a download URL may be displayed so that the proposed learned model can be downloaded immediately by selecting it. The learned model proposal system may be terminated by selecting the end button 802. By selecting the "to search screen" button 803, the screen may return to the question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment shown in FIG. 6 or FIG. 7. If no matching learned model is found, a learned model whose purpose matches and whose environment is close may be output as a proposal or as a reference. By using the proposed learned model, the user can obtain accurate image analysis results suited to the purpose without spending time on learning.
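 Tying the earlier sketches together, the search of the FIG. 6 example could look like the following call, assuming the hypothetical LEARNED_MODEL_DB and propose_models introduced above; the result noted in the comment simply mirrors the "Bunruiki002" example in the text.

    proposals = propose_models(
        LEARNED_MODEL_DB,
        purpose="suspicious person detection",
        place="indoor",
        area_sqm=20.0,
        camera_position="ceiling center",
        lighting="LED",
    )
    for rec in proposals:
        print(rec.model_id, rec.download_url)
    # With the sample database above, this prints "Bunruiki002" and its download URL.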
 According to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, an existing learned model similar to them can be proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program capable of obtaining accurate image analysis results without spending time on learning.
 [Image comparison process]
 FIG. 4 is a diagram showing the functional blocks of the camera 100 and the computer 200 and the relationships among their functions when image comparison is performed. In addition to the configuration of FIG. 2, the control unit 210 of the computer 200 implements the image acquisition module 213 in cooperation with the communication unit 220 and the storage unit 230, and implements the image comparison module 214 in cooperation with the storage unit 230. FIG. 5 is a flowchart of the learned model proposal process when image comparison is performed. The processing executed by each of the modules described above will be explained in accordance with this process. Since the processing in steps S501 to S507 of FIG. 5 corresponds to the processing in steps S301 to S307 of FIG. 3, the description starts from step S508. As with step S301, the processing in step S501 can be omitted when sufficient learned models are already stored in the learned model database 23.
 The image acquisition module 213 of the computer 200 transmits an image transmission request to the camera 100 in order to acquire the image to be analyzed (step S508).
 The camera 100 receives the image transmission request from the computer 200 and transmits an image via the communication unit 120 (step S509). The camera 100 may transmit not only an image being captured in real time but also an image captured in the past and stored in the storage unit 130.
 The image acquisition module 213 of the computer 200 acquires the image to be analyzed from the camera 100 (step S510).
 Although not shown in the flowchart, when, for example, the environment information acquired in step S507 is insufficient, the environment acquisition module 212 may analyze the image acquired in step S510 to determine the location, the area, the camera position, the lighting, and the like.
 Next, the image comparison module 214 of the computer 200 compares the image to be analyzed acquired in step S510 with images of the predetermined learning data (teacher data) of the learned models stored in the learned model database 23 (step S511). The images of the predetermined learning data used for this comparison need not be all of the images of a given learned model; one or several images may be picked out and used. Through this comparison, when the image to be analyzed is similar to the images of a learned model's predetermined learning data, it is considered that image analysis using that learned model will be accurate, and the model is proposed. This comparison may always be performed to confirm that the image is similar to the images of a learned model whose purpose and environment match; it may be performed to narrow down which learned model to propose when there are several learned models whose purpose and environment match; or it may be performed to select the learned model to propose from among learned models matching only the purpose when there is no learned model whose purpose and environment both match. If no matching learned model is found, the result may be that there is no matching learned model, or a learned model whose purpose matches and whose environment is the closest may be proposed.
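 One way the comparison of step S511 could be realized, checking whether the image to be analyzed resembles a few sampled teacher-data images of a candidate model, is sketched below using a colour-histogram similarity with OpenCV. The histogram representation and the 0.7 threshold are assumptions for illustration, not the comparison method prescribed by the patent.

    import cv2
    import numpy as np

    def _histogram(image_bgr):
        # 8x8x8 colour histogram, normalised so images of different sizes compare fairly.
        hist = cv2.calcHist([image_bgr], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def is_similar(query_image, sampled_teacher_images, threshold=0.7):
        """Return True if the query image resembles the picked teacher-data images."""
        q = _histogram(query_image)
        scores = [cv2.compareHist(q, _histogram(t), cv2.HISTCMP_CORREL)
                  for t in sampled_teacher_images]
        # Similar if, on average, the histograms correlate above the (assumed) threshold.
        return bool(np.mean(scores) >= threshold)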
 Finally, the learned model proposal module 241 of the computer 200 proposes an appropriate learned model based on the image comparison result of step S511 (step S512). The proposal of the learned model here may be output to the input/output unit 240 of the computer 200 or to the input/output unit of another terminal (not shown) used by the user. FIG. 8, as described above, is an example of the learned model proposal screen. By using the proposed learned model, the user can obtain accurate image analysis results suited to the purpose without spending time on learning.
 According to the present invention, by acquiring and using not only the purpose of image analysis and the environment in which the image is captured but also the image to be analyzed, an existing learned model similar to them can be proposed, making it possible to provide a learned model proposal system, a learned model proposal method, and a program capable of obtaining accurate image analysis results without spending time on learning.
 The means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program. The program may be provided, for example, in a form delivered from a computer via a network (SaaS: Software as a Service), or in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM or the like), a DVD (DVD-ROM, DVD-RAM, or the like), or a compact memory. In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it. The program may also be recorded in advance on a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.
 Although the embodiments of the present invention have been described above, the present invention is not limited to these embodiments. The effects described in the embodiments of the present invention merely list the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.
100 Camera, 200 Computer, 300 Communication network

Claims (9)

  1.  A system for proposing a learned model suitable for image analysis, comprising:
     a learned model database that stores learned models for performing image analysis in association with a purpose and an environment;
     purpose acquisition means for acquiring the purpose of image analysis;
     environment acquisition means for acquiring the environment in which an image for the purpose is captured; and
     learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  2.  The learned model proposal system according to claim 1, wherein the learned model includes a learned classifier trained with predetermined learning data consisting of past images and correct answer data, and
     the learned model proposal means proposes the learned classifier as the learned model.
  3.  The learned model proposal system according to claim 2, wherein the learned model consists of the type of classifier used when classifying images with a classifier and a conversion method for converting images into feature vectors.
  4.  The learned model proposal system according to claim 1, wherein the learned model is a learned convolutional neural network trained with predetermined learning data consisting of past images and correct answer data.
  5.  The learned model proposal system according to any one of claims 2 to 4, further comprising:
     image acquisition means for acquiring an image of the environment in which the image analysis is to be performed; and
     image comparison means for determining whether the acquired image and an image of the predetermined learning data are similar,
     wherein the learned model proposal means proposes the learned model when the images are similar.
  6.  The learned model proposal system according to any one of claims 1 to 5, wherein the environment acquisition means acquires an answer entered in response to a presented question as data relating to the environment.
  7.  The learned model proposal system according to any one of claims 1 to 5, wherein the environment acquisition means acquires data detected by a sensor or a camera.
  8.  A learned model proposal method comprising, in a system that proposes a learned model suitable for image analysis:
     a learned model database that stores learned models for performing image analysis in association with a purpose and an environment;
     a step of acquiring the purpose of image analysis;
     a step of acquiring the environment in which an image for the purpose is captured; and
     a step of referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  9.  A program for causing a learned model proposal system, which comprises a learned model database that stores learned models for performing image analysis in association with a purpose and an environment, to execute:
     a step of acquiring the purpose of image analysis;
     a step of acquiring the environment in which an image for the purpose is captured; and
     a step of referring to the learned model database and proposing a learned model suited to the purpose and the environment.
PCT/JP2018/020288 2018-05-28 2018-05-28 Trained model suggestion system, trained model suggestion method, and program WO2019229789A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020521646A JP7068745B2 (en) 2018-05-28 2018-05-28 Trained model proposal system, trained model proposal method, and program
PCT/JP2018/020288 WO2019229789A1 (en) 2018-05-28 2018-05-28 Trained model suggestion system, trained model suggestion method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/020288 WO2019229789A1 (en) 2018-05-28 2018-05-28 Trained model suggestion system, trained model suggestion method, and program

Publications (1)

Publication Number Publication Date
WO2019229789A1 true WO2019229789A1 (en) 2019-12-05

Family

ID=68697263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/020288 WO2019229789A1 (en) 2018-05-28 2018-05-28 Trained model suggestion system, trained model suggestion method, and program

Country Status (2)

Country Link
JP (1) JP7068745B2 (en)
WO (1) WO2019229789A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626950A (en) * 2020-05-19 2020-09-04 上海集成电路研发中心有限公司 Online training device and method for image denoising model
WO2021261140A1 (en) * 2020-06-22 2021-12-30 株式会社片岡製作所 Cell treatment device, learning device, and learned model proposal device
US20220100987A1 (en) * 2020-09-28 2022-03-31 Yokogawa Electric Corporation Monitoring device, learning apparatus, method and storage medium
WO2022064631A1 (en) * 2020-09-25 2022-03-31 日本電気株式会社 Image analysis system and image analysis method
JP7305850B1 (en) 2022-06-30 2023-07-10 菱洋エレクトロ株式会社 System, terminal, server, method and program using machine learning

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240029126A (en) * 2022-08-26 2024-03-05 한국전자기술연구원 System for generating deep learning model optimized for installation environment and method for configuring training data thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017159614A1 (en) * 2016-03-14 2017-09-21 オムロン株式会社 Learning service provision device
WO2018078862A1 (en) * 2016-10-31 2018-05-03 株式会社オプティム Image analysis system, image analysis method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017159614A1 (en) * 2016-03-14 2017-09-21 オムロン株式会社 Learning service provision device
WO2018078862A1 (en) * 2016-10-31 2018-05-03 株式会社オプティム Image analysis system, image analysis method, and program

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626950A (en) * 2020-05-19 2020-09-04 上海集成电路研发中心有限公司 Online training device and method for image denoising model
WO2021261140A1 (en) * 2020-06-22 2021-12-30 株式会社片岡製作所 Cell treatment device, learning device, and learned model proposal device
WO2022064631A1 (en) * 2020-09-25 2022-03-31 日本電気株式会社 Image analysis system and image analysis method
US20220100987A1 (en) * 2020-09-28 2022-03-31 Yokogawa Electric Corporation Monitoring device, learning apparatus, method and storage medium
JP2022055229A (en) * 2020-09-28 2022-04-07 横河電機株式会社 Monitoring device, learning apparatus, method and program
US11881048B2 (en) 2020-09-28 2024-01-23 Yokogawa Electric Corporation Monitoring device, learning apparatus, method and storage medium
JP7305850B1 (en) 2022-06-30 2023-07-10 菱洋エレクトロ株式会社 System, terminal, server, method and program using machine learning
JP7398587B1 (en) 2022-06-30 2023-12-14 菱洋エレクトロ株式会社 Systems, terminals, servers, methods, and programs using machine learning
JP2024005989A (en) * 2022-06-30 2024-01-17 菱洋エレクトロ株式会社 System using machine learning, terminal, server, method and program

Also Published As

Publication number Publication date
JPWO2019229789A1 (en) 2021-06-24
JP7068745B2 (en) 2022-05-17

Similar Documents

Publication Publication Date Title
WO2019229789A1 (en) Trained model suggestion system, trained model suggestion method, and program
CN109635621B (en) System and method for recognizing gestures based on deep learning in first-person perspective
CN106255866B (en) Communication system, control method and storage medium
US10847186B1 (en) Video tagging by correlating visual features to sound tags
WO2019156332A1 (en) Device for producing artificial intelligence character for augmented reality and service system using same
CN111339246A (en) Query statement template generation method, device, equipment and medium
KR20200076169A (en) Electronic device for recommending a play content and operating method thereof
KR101847200B1 (en) Method and system for controlling an object
CN111492374A (en) Image recognition system
US11030479B2 (en) Mapping visual tags to sound tags using text similarity
US20190012347A1 (en) Information processing device, method of processing information, and method of providing information
US20200112838A1 (en) Mobile device that creates a communication group based on the mobile device identifying people currently located at a particular location
JP2010224715A (en) Image display system, digital photo-frame, information processing system, program, and information storage medium
US20190266906A1 (en) Method and System for Implementing AI-Powered Augmented Reality Learning Devices
KR102646344B1 (en) Electronic device for image synthetic and operating thereof
US10743061B2 (en) Display apparatus and control method thereof
KR20200013164A (en) Electronic apparatus and controlling method thereof
US9992407B2 (en) Image context based camera configuration
US10965915B2 (en) Collection system, program for terminal, and collection method
CN106412469B (en) The projecting method of optical projection system, projection arrangement and optical projection system
JP6267840B1 (en) Computer system, edge device control method and program
JP2020042528A (en) Object identification system, model learning system, object identification method, model learning method, and program
US11677836B2 (en) Server apparatus, communication system and communication method
KR20190046364A (en) Method and system for providing experiential learning service using on line
JP6857537B2 (en) Information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921005

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2020521646

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18921005

Country of ref document: EP

Kind code of ref document: A1