WO2019229789A1 - Trained model suggestion system, trained model suggestion method, and program - Google Patents

Trained model suggestion system, trained model suggestion method, and program

Info

Publication number
WO2019229789A1
Authority
WO
WIPO (PCT)
Prior art keywords
learned model
environment
image
learned
image analysis
Prior art date
Application number
PCT/JP2018/020288
Other languages
English (en)
Japanese (ja)
Inventor
俊二 菅谷
Original Assignee
株式会社オプティム
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社オプティム
Priority to PCT/JP2018/020288 priority Critical patent/WO2019229789A1/fr
Priority to JP2020521646A priority patent/JP7068745B2/ja
Publication of WO2019229789A1 publication Critical patent/WO2019229789A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to a learned model proposal system, a learned model proposal method, and a program that can propose a similar existing learned model by acquiring and using the purpose of image analysis and the environment in which the image is captured.
  • Patent Document 1 proposes a method that performs image analysis on images of people to determine who appears in them and automatically categorizes the images.
  • As a machine learning technique for image analysis by artificial intelligence, supervised learning is well known, and a method for generating a learned model suited to a given purpose has also been proposed (Patent Document 2).
  • The present inventor focused on the possibility that, by using an existing learned model whose purpose of image analysis and image capturing environment match those of the task at hand, highly accurate image analysis results can be obtained without spending time on learning.
  • It is therefore an object of the present invention to provide a learned model proposal system, a learned model proposal method, and a program that acquire the purpose of image analysis and the environment in which the image is captured, propose a similar existing learned model, and thereby make it possible to obtain accurate image analysis results without spending time on learning.
  • the present invention provides the following solutions.
  • The invention according to the first feature provides a learned model proposal system that proposes a learned model suitable for image analysis, comprising: a learned model database that stores learned models for image analysis in association with a purpose and an environment; purpose acquisition means for acquiring the purpose of the image analysis; environment acquisition means for acquiring the environment in which the image for that purpose is captured; and learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • That is, the system of the first feature comprises a learned model database that stores learned models for image analysis in association with a purpose and an environment, purpose acquisition means for acquiring the purpose of image analysis, environment acquisition means for acquiring the environment in which the image for that purpose is captured, and learned model proposal means for referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • The invention according to the first feature falls in the category of a learned model proposal system, but the corresponding learned model proposal method and program have the same actions and effects.
  • The invention according to the second feature is the learned model proposal system of the first feature, wherein the learned model includes a learned classifier trained with predetermined learning data consisting of past images and correct answer data, and the learned model proposal means proposes the learned classifier as the learned model.
  • That is, in the system of the second feature, the learned model is a classifier trained with predetermined learning data consisting of past images and correct answer data, and the learned model proposal means proposes this learned classifier as the learned model.
  • The invention according to the third feature is the learned model proposal system of the second feature, wherein the learned model includes the type of classifier used when classifying images and the conversion method used to convert images into feature vectors.
  • The invention according to the fourth feature is the learned model proposal system of the first feature, wherein the learned model is a learned convolutional neural network trained with predetermined learning data consisting of past images and correct answer data.
  • The invention according to the fifth feature is the learned model proposal system of any one of the second to fourth features, further comprising: image acquisition means for acquiring an image of the environment to be subjected to the image analysis; and image comparison means for determining whether the acquired image and the images of the predetermined learning data are similar, wherein the learned model proposal means proposes the learned model when the images are similar.
  • The invention according to the sixth feature is the learned model proposal system of any one of the first to fifth features, wherein the environment acquisition means acquires, as data relating to the environment, answers input in response to presented questions.
  • The invention according to the seventh feature is the learned model proposal system of any one of the first to fifth features, wherein the environment acquisition means acquires data detected by a sensor or a camera.
  • The invention according to the eighth feature provides a learned model proposal method executed in a system that proposes a learned model suitable for image analysis and that has a learned model database storing learned models for image analysis in association with a purpose and an environment, the method comprising the steps of: acquiring the purpose of the image analysis; acquiring the environment in which the image for that purpose is captured; and referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • The invention according to the ninth feature provides a program that causes a learned model proposal system having a learned model database storing learned models for image analysis in association with a purpose and an environment to execute the steps of: acquiring the purpose of the image analysis; acquiring the environment in which the image for that purpose is captured; and referring to the learned model database and proposing a learned model suited to the purpose and the environment.
  • According to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, a similar existing learned model is proposed, and it is possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain highly accurate image analysis results without spending time on learning.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions.
  • FIG. 3 is a flowchart of the learned model proposal process.
  • FIG. 4 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions when performing image comparison.
  • FIG. 5 is a flowchart of learned model proposal processing when image comparison is performed.
  • FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • FIG. 8 is an example of a learned model proposal screen.
  • FIG. 9 is an example of the configuration of the learned model database.
  • FIG. 1 is a schematic diagram of a preferred embodiment of the present invention. An outline of the present invention will be described with reference to FIG. 1.
  • the learned model proposal system includes a camera 100, a computer 200, and a communication network 300.
  • the number of cameras 100 is not limited to one and may be plural.
  • the computer 200 is not limited to a real device, and may be a virtual device.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • The computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input/output unit 240.
  • the storage unit 230 includes a learned model database 23.
  • the control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input / output unit 240.
  • the input / output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging device that can perform data communication with the computer 200, includes an imaging element, a lens, and other imaging hardware, and captures the images to be analyzed.
  • A web camera is illustrated as an example, but the camera may be any imaging apparatus having the necessary functions, such as a digital camera, a digital video camera, a camera mounted on a drone, a wearable device camera, a security camera, an in-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • the camera 100 may be a stereo camera, in which case the distance to the subject group can be measured.
  • the camera 100 may be provided with a light intensity sensor, and in that case, the ambient light intensity can be measured.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • A desktop computer is illustrated as an example, but the computer 200 may also be a mobile phone, a portable information terminal, a tablet terminal, a personal computer, an electrical appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
  • First, a plurality of learned models are stored in the learned model database 23 of the computer 200 (step S01).
  • The learned models may be acquired from another computer or a storage medium, or may be created by the computer 200. This step S01 can be omitted when sufficient learned models are already stored in the learned model database 23.
  • FIG. 9 shows an example of the configuration of the learned model database.
  • Here, a learned model refers to the combination of predetermined learning data (teacher data) composed of past images and correct answer data and a machine learning method, such as a learned classifier or a learned convolutional neural network (CNN), trained on that learning data.
  • When a conversion method for converting images into feature vectors is used, that conversion method is also included in the learned model together with the machine learning method.
  • In the learned model database 23, the purpose of image analysis and the environment in which the images were captured are stored in association with each learned model.
  • Examples of the purpose include detecting the entry of a suspicious person (suspicious person detection), detecting the appropriate harvest time of crops (crop detection), and detecting the occurrence of pests (pest detection).
  • Examples of the environment include conditions such as location, area, camera position, and lighting. For example, the location may be indoors, outdoors (city), or outdoors (farm); the area may be a number of square meters or hectares; the camera position may be a corner of the ceiling, the center of the ceiling, or on a desk or shelf; and the lighting may be a fluorescent lamp, an LED, natural light, or none. A minimal database sketch is shown below.
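  • As an illustration only (not part of the specification), the learned model database 23 could be laid out as a single table keyed by model ID, with columns for the purpose, the environment items described above, the machine learning method, and a download location. All table and column names below are assumptions made for this sketch.

        # Minimal sketch of a possible layout for the learned model database 23.
        # Table and column names are illustrative assumptions, not from the patent.
        import sqlite3

        conn = sqlite3.connect("learned_models.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS learned_model (
                model_id     TEXT PRIMARY KEY,  -- e.g. "Bunruki002"
                purpose      TEXT NOT NULL,     -- e.g. "suspicious person detection"
                location     TEXT,              -- e.g. "indoors", "outdoors (farm)"
                area_sqm     REAL,              -- monitored area in square meters
                camera_pos   TEXT,              -- e.g. "center of the ceiling"
                lighting     TEXT,              -- e.g. "LED", "natural light", "none"
                method       TEXT,              -- e.g. "SVM", "CNN"
                feature_conv TEXT,              -- e.g. "HOG"; NULL if unused
                download_url TEXT               -- where the stored model can be fetched
            )
        """)
        conn.execute(
            "INSERT OR REPLACE INTO learned_model VALUES (?,?,?,?,?,?,?,?,?)",
            ("Bunruki002", "suspicious person detection", "indoors", 20.0,
             "center of the ceiling", "LED", "SVM", "HOG",
             "https://example.com/models/Bunruki002"),
        )
        conn.commit()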
  • Next, the purpose acquisition module 211 of the computer 200 acquires the purpose for which the image analysis is to be performed (step S02).
  • The purpose may be transmitted from the camera 100, input by the user via the input/output unit 240 of the computer 200, or input by the user via another terminal (not shown).
  • Next, the environment acquisition module 212 of the computer 200 acquires the environment in which the images to be analyzed are captured (step S03).
  • As illustrated in FIG. 1, the environment may be transmitted from the camera 100, input by the user via the input/output unit 240 of the computer 200, or input by the user via another terminal (not shown).
  • FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • Questions about the purpose of the image analysis and the image capturing environment are displayed to the user, and the user selects or inputs answers to specify the purpose and the environment.
  • The example of FIG. 6 shows a case in which suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and suspicious person detection is selected.
  • For the image capturing environment, indoors, outdoors (city), and outdoors (farm) are presented as the location, indoors is selected, and 20 square meters is entered as the area.
  • For the camera position, a corner of the ceiling, the center of the ceiling, and on a desk or shelf are presented, and the center of the ceiling is selected.
  • For the lighting, fluorescent lamp, LED, and natural light are presented, and LED is selected.
  • When the search button 601 is selected, the answers to the questions are confirmed, and the purpose acquisition module 211 and the environment acquisition module 212 complete their acquisition.
  • Here, the questions about the purpose of image analysis and the image capturing environment are presented on a single screen, but separate screens may be used.
  • FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • The example of FIG. 7 shows a case in which suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and crop detection is selected.
  • For the image capturing environment, indoors, outdoors (city), and outdoors (farm) are presented as the location, and outdoors (farm) is selected.
  • 5 ha is entered as the area.
  • For the camera position, outside a building, a utility pole, and a drone are presented, and the utility pole is selected. For the lighting, none and yes are presented, and none is selected.
  • In FIG. 7, because the purpose of image analysis differs from that of FIG. 6, the options presented for the image capturing environment are changed accordingly. By changing the environment options in accordance with items already selected, such as the purpose of image analysis and the location, the user can more easily enter input suited to the purpose and location.
  • Finally, the learned model proposal module 241 of the computer 200 refers to the learned model database 23, checks which learned model has a purpose and an environment matching the purpose acquired in step S02 and the environment acquired in step S03, and proposes a suitable learned model (step S04).
  • The proposal of the learned model may be output to the input/output unit 240 of the computer 200 or to the input/output unit of another terminal (not shown) used by the user.
  • FIG. 8 is an example of a learned model proposal screen.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • When the learned model database 23 of FIG. 9 is searched with the acquired purpose and environment, the learned model "Bunruki002" matches, so the example shown here proposes "Bunruki002" as the learned model.
  • As shown by the link 801 in FIG. 8, a download URL may be displayed so that the user can select it and immediately download the proposed learned model.
  • the learned model proposal system may be terminated by selecting the end button 802.
  • Alternatively, the screen may return to the question presentation and input screens for acquiring the purpose of image analysis and the image capturing environment shown in FIGS. 6 and 7.
  • When no suitable learned model is found, a learned model whose purpose matches and whose environment is close may be output as a proposal or reference. A sketch of such matching logic follows.
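  • The following is a minimal sketch, under the assumptions of the database layout above, of how step S04 could be implemented: first look for a model whose purpose and environment both match, and otherwise fall back to the same-purpose model whose environment is closest. The scoring rule and function names are illustrative assumptions, not the patent's method.

        # Illustrative matching for step S04 (uses the `conn` from the database sketch).
        def propose_model(conn, purpose, env):
            rows = conn.execute(
                "SELECT model_id, location, camera_pos, lighting, download_url "
                "FROM learned_model WHERE purpose = ?", (purpose,)
            ).fetchall()
            if not rows:
                return None  # not even the purpose matches

            def closeness(row):
                _, location, camera_pos, lighting, _ = row
                # Count how many environment items agree with the acquired environment.
                return sum([location == env.get("location"),
                            camera_pos == env.get("camera_pos"),
                            lighting == env.get("lighting")])

            best = max(rows, key=closeness)
            return {"model_id": best[0], "download_url": best[4],
                    "exact_match": closeness(best) == 3}

        proposal = propose_model(conn, "suspicious person detection",
                                 {"location": "indoors",
                                  "camera_pos": "center of the ceiling",
                                  "lighting": "LED"})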
  • As described above, according to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, a similar existing learned model is proposed, and it is possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain highly accurate image analysis results without spending time on learning.
  • FIG. 2 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions.
  • the camera 100 includes an imaging unit 10, a control unit 110, a communication unit 120, and a storage unit 130.
  • the computer 200 includes a control unit 210, a communication unit 220, a storage unit 230, and an input / output unit 240.
  • the control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input / output unit 240. Further, the input / output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230.
  • the communication network 300 may be a public communication network such as the Internet or a dedicated communication network, and enables communication between the camera 100 and the computer 200.
  • The camera 100 is an imaging device that can perform data communication with the computer 200, includes an imaging element, a lens, and other imaging hardware, and captures the images to be analyzed.
  • A web camera is illustrated as an example, but the camera may be any imaging apparatus having the necessary functions, such as a digital camera, a digital video camera, a camera mounted on a drone, a wearable device camera, a security camera, an in-vehicle camera, or a 360-degree camera.
  • the captured image may be stored in the storage unit 130.
  • the camera 100 may be a stereo camera, in which case the distance to the subject group can be measured.
  • the camera 100 may be provided with a light intensity sensor, and in that case, the ambient light intensity can be measured.
  • The camera 100 includes, as the imaging unit 10, a lens, an imaging element, various buttons, a flash, and the like, and captures moving images, still images, and other captured images.
  • The captured images are precise images containing the amount of information necessary for image analysis.
  • the control unit 110 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
  • As the communication unit 120, the camera 100 includes a device for enabling communication with other devices, for example, a WiFi (Wireless Fidelity) device compliant with IEEE 802.11 or a wireless device compliant with an IMT-2000 standard such as a third or fourth generation mobile communication system. A wired LAN connection may also be used.
  • the storage unit 130 includes a data storage unit such as a hard disk or a semiconductor memory, and stores necessary data such as captured images. The purpose of image analysis and the shooting environment of the image may be stored together.
  • the computer 200 is a computing device capable of data communication with the camera 100.
  • A desktop computer is illustrated as an example, but the computer 200 may also be a mobile phone, a portable information terminal, a tablet terminal, a personal computer, an electrical appliance such as a netbook terminal, a slate terminal, an electronic book terminal, or a portable music player, or a wearable terminal such as smart glasses or a head-mounted display.
  • the control unit 210 includes a CPU, RAM, ROM, and the like.
  • The control unit 210 implements the purpose acquisition module 211 and the environment acquisition module 212 in cooperation with the communication unit 220, the storage unit 230, and the input/output unit 240.
  • As the communication unit 220, the computer 200 includes a device for enabling communication with other devices, for example, a WiFi device compliant with IEEE 802.11 or a wireless device compliant with an IMT-2000 standard such as a third or fourth generation mobile communication system. A wired LAN connection may also be used.
  • the storage unit 230 includes a data storage unit using a hard disk or a semiconductor memory, and stores data necessary for processing such as captured images, teacher data, and image analysis results. Further, the storage unit 230 includes a learned model database 23.
  • the input / output unit 240 has functions necessary to use the learned model proposal system.
  • the input / output unit 240 implements the learned model proposal module 241 in cooperation with the control unit 210 and the storage unit 230.
  • As input forms, a liquid crystal display with a touch panel function, a keyboard, a mouse, a pen tablet, hardware buttons on the device, a microphone for voice recognition, and the like can be provided.
  • As output forms, a liquid crystal display, a PC display, projection by a projector, audio output, and the like can be considered.
  • the function of the present invention is not particularly limited by the input / output method.
  • FIG. 3 is a flowchart of the learned model proposal process. The processing executed by each of the modules described above will be described along this flowchart.
  • a plurality of learned models are stored in the learned model database 23 of the computer 200 (step S301).
  • the learned model may be acquired from another computer or a storage medium, or may be created by the computer 200. Further, this step S301 can be omitted when a sufficiently learned model is already stored in the learned model database 23.
  • FIG. 9 shows an example of the configuration of the learned model database.
  • Here, a learned model refers to the combination of predetermined learning data (teacher data) composed of past images and correct answer data and a machine learning method, such as a learned classifier or a learned convolutional neural network, trained on that learning data.
  • When a conversion method for converting images into feature vectors is used, that conversion method is also included in the learned model together with the machine learning method.
  • In the learned model database 23, the purpose of image analysis and the environment in which the images were captured are stored in association with each learned model.
  • Examples of the purpose include detecting the entry of a suspicious person (suspicious person detection), detecting the appropriate harvest time of crops (crop detection), and detecting the occurrence of pests (pest detection).
  • Examples of the environment include conditions such as location, area, camera position, and lighting. For example, the location may be indoors, outdoors (city), or outdoors (farm); the area may be a number of square meters or hectares; the camera position may be a corner of the ceiling, the center of the ceiling, or on a desk or shelf; and the lighting may be a fluorescent lamp, an LED, natural light, or none.
  • machine learning is performed using predetermined learning data (teacher data) composed of past images and correct answer data.
  • the machine learning method used here is preferably suitable for image analysis.
  • Examples of machine learning techniques include neural networks such as convolutional neural networks (CNN), perceptrons, recurrent neural networks (RNN), and residual networks (ResNet), as well as support vector machines (SVM) and naive Bayes classifiers.
  • Examples of conversion methods for converting an image into a feature vector include Bag of Visual Words, HOG (Histogram of Oriented Gradients), ORB, and SURF.
  • The learned model stored in the learned model database 23 thus consists of the predetermined learning data composed of past images and correct answer data and the machine learning method, such as a classifier or convolutional neural network, trained on that data.
  • When a conversion method for converting images into feature vectors is used, that conversion method is also included in the learned model together with the machine learning method.
  • A plurality of machine learning methods may be tried on given learning data, and only the method with the best image analysis results may be stored in the learned model database 23.
  • When a learned model is created by the computer 200, learning takes time, so it is desirable to perform the learning when sufficient time and CPU resources are available, for example before the learned model proposal system goes into operation. A training sketch follows.
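  • As a minimal sketch only, the creation of a learned model described above could combine one of the feature conversions (here HOG) with several candidate classifiers and keep the best-scoring one. The helper names, the candidate set, and the cross-validation setup are assumptions for illustration, not the patent's prescribed method.

        # Try several machine learning methods on the same teacher data and keep the best.
        import numpy as np
        from skimage.feature import hog
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC
        from sklearn.naive_bayes import GaussianNB

        def to_feature_vectors(images):
            # HOG is one of the image-to-feature-vector conversions mentioned above;
            # `images` is assumed to be a list of equally sized grayscale arrays.
            return np.array([hog(img, pixels_per_cell=(16, 16)) for img in images])

        def best_classifier(images, labels):
            X, y = to_feature_vectors(images), np.array(labels)
            candidates = {"SVM": SVC(kernel="rbf"), "NaiveBayes": GaussianNB()}
            scores = {name: cross_val_score(clf, X, y, cv=3).mean()
                      for name, clf in candidates.items()}
            best = max(scores, key=scores.get)
            # Return the name, the classifier fitted on all data, and its score.
            return best, candidates[best].fit(X, y), scores[best]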
  • Next, the purpose acquisition module 211 of the computer 200 transmits a purpose transmission request in order to acquire the purpose for which the image analysis is to be performed (step S302).
  • FIG. 3 illustrates the case where the purpose is acquired by having the camera 100 transmit it.
  • The destination of the purpose transmission request may instead be another terminal (not shown), or, instead of transmitting the request, a question may be presented on the input/output unit 240 of the computer 200.
  • the camera 100 receives the purpose transmission request from the computer 200 and transmits the purpose through the communication unit 120 (step S303).
  • When the destination of the purpose transmission request is another terminal (not shown), that terminal transmits the purpose to the computer 200.
  • When a question is presented instead, the moment the user confirms the answer to the question corresponds to the purpose transmission.
  • the purpose acquisition module 211 of the computer 200 acquires the purpose (step S304).
  • The acquisition source may be the camera 100, another terminal (not shown), or the input/output unit 240 of the computer 200, depending on the destination of the purpose transmission request in step S302.
  • Next, the environment acquisition module 212 of the computer 200 transmits an environment transmission request in order to acquire the environment in which the images to be analyzed are captured (step S305).
  • FIG. 3 illustrates the case where the environment is acquired by having the camera 100 transmit it.
  • The destination of the environment transmission request may instead be another terminal (not shown), or, instead of transmitting the request, a question may be presented on the input/output unit 240 of the computer 200.
  • the camera 100 receives the environment transmission request from the computer 200 and transmits the environment via the communication unit 120 (step S306).
  • When the camera 100 is a stereo camera, the area or the like may be determined by analyzing the distance to the subject and transmitted as the environment.
  • When the camera 100 includes a special sensor such as a light intensity sensor, the location, the lighting, and the like may be determined from the sensor values and transmitted as the environment.
  • When the destination of the environment transmission request is another terminal (not shown), that terminal transmits the environment to the computer 200.
  • When a question is presented instead, the moment the user confirms the answer to the question corresponds to the environment transmission.
  • the environment acquisition module 212 of the computer 200 acquires the environment (step S307).
  • the acquisition destination may be the camera 100, another terminal (not shown), or the input / output unit 240 of the computer 200, in accordance with the environment transmission request destination in step S305.
  • The environment acquisition module 212 may directly acquire the location, area, camera position, lighting, and the like as the environment.
  • Alternatively, it may use acquired images or sensor data to determine, for example, whether the location is indoors or outdoors, and when the camera 100 is a stereo camera, the distance to the subject and the area may be determined from its images. A sketch of such inference is shown below.
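  • The following is a minimal sketch, under assumed thresholds and the standard stereo-depth relation, of deriving environment entries from a light intensity reading and stereo parameters; none of the numeric values or field names come from the patent.

        # Derive environment entries from sensor and camera data (illustrative only).
        def infer_environment(lux, baseline_m=None, disparity_px=None, focal_px=None):
            env = {}
            # Very bright scenes suggest natural light outdoors; dimmer ones suggest indoors.
            env["lighting"] = "natural light" if lux > 10000 else "artificial"
            env["location"] = "outdoors" if lux > 10000 else "indoors"
            if None not in (baseline_m, disparity_px, focal_px) and disparity_px > 0:
                # Stereo depth: distance = focal length (px) * baseline (m) / disparity (px).
                env["subject_distance_m"] = focal_px * baseline_m / disparity_px
            return env

        print(infer_environment(lux=15000, baseline_m=0.1, disparity_px=40, focal_px=800))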
  • FIG. 6 is an example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • Questions about the purpose of the image analysis and the image capturing environment are displayed to the user, and the user selects or inputs answers to specify the purpose and the environment.
  • The example of FIG. 6 shows a case in which suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and suspicious person detection is selected.
  • For the image capturing environment, indoors, outdoors (city), and outdoors (farm) are presented as the location, indoors is selected, and 20 square meters is entered as the area.
  • For the camera position, a corner of the ceiling, the center of the ceiling, and on a desk or shelf are presented, and the center of the ceiling is selected.
  • For the lighting, fluorescent lamp, LED, and natural light are presented, and LED is selected.
  • When the search button 601 is selected, the answers to the questions are confirmed, and the purpose acquisition module 211 and the environment acquisition module 212 complete their acquisition.
  • Here, the questions about the purpose of image analysis and the image capturing environment are presented on a single screen, but separate screens may be used.
  • FIG. 7 is another example of a question presentation and input screen for acquiring the purpose of image analysis and the image capturing environment.
  • The example of FIG. 7 shows a case in which suspicious person detection, crop detection, and pest detection are presented as the purpose of image analysis and crop detection is selected.
  • For the image capturing environment, indoors, outdoors (city), and outdoors (farm) are presented as the location, and outdoors (farm) is selected.
  • For the camera position, outside a building, a utility pole, and a drone are presented, and the utility pole is selected. For the lighting, none and yes are presented, and none is selected.
  • FIG. 7 shows an example in which the options presented for the image capturing environment are changed because the purpose of image analysis differs from that of FIG. 6.
  • By changing the environment options in accordance with items already selected, such as the purpose of image analysis and the location, input suited to the purpose and location can be made more easily, as sketched below.
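  • A minimal sketch of this option switching: a lookup keyed by the selected purpose returns the environment questions to present. The option sets are taken from the screen examples of FIGS. 6 and 7; the data structure and function name are assumptions.

        # Present environment options that depend on the selected purpose.
        ENVIRONMENT_OPTIONS = {
            "suspicious person detection": {
                "location": ["indoors", "outdoors (city)", "outdoors (farm)"],
                "camera position": ["corner of the ceiling", "center of the ceiling",
                                    "on a desk or shelf"],
                "lighting": ["fluorescent lamp", "LED", "natural light"],
            },
            "crop detection": {
                "location": ["indoors", "outdoors (city)", "outdoors (farm)"],
                "camera position": ["outside a building", "utility pole", "drone"],
                "lighting": ["none", "yes"],
            },
        }

        def questions_for(purpose):
            # Fall back to a default option set for purposes not listed here.
            return ENVIRONMENT_OPTIONS.get(purpose,
                                           ENVIRONMENT_OPTIONS["suspicious person detection"])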
  • Then, the learned model proposal module 241 of the computer 200 refers to the learned model database 23, searches for a learned model whose purpose and environment match the purpose acquired in step S304 and the environment acquired in step S307, and proposes a suitable learned model (step S308).
  • the proposal of the learned model here may be output to the input / output unit 240 of the computer 200 or may be output to the input / output unit of another terminal (not shown) used by the user.
  • FIG. 8 is an example of a learned model proposal screen.
  • This screen may be displayed on the input / output unit 240 of the computer 200 or may be displayed on the input / output unit of another terminal (not shown) used by the user.
  • In this example, when the learned model database 23 of FIG. 9 is searched with the acquired purpose and environment, "Bunruki002" is proposed as a suitable learned model.
  • A download URL may be indicated so that the proposed learned model can be downloaded immediately by selecting it.
  • The learned model proposal system may be terminated by selecting the end button 802. Further, by selecting the button 801 for returning to the search screen, the screen may return to the question presentation and input screens for acquiring the purpose of image analysis and the image capturing environment shown in FIGS. 6 and 7. When no suitable learned model is found, a learned model whose purpose matches and whose environment is close may be output as a proposal or reference. By using the proposed learned model, the user can obtain accurate image analysis results suited to the purpose without spending time on learning.
  • As described above, according to the present invention, by acquiring and using the purpose of image analysis and the environment in which the image is captured, a similar existing learned model is proposed, and it is possible to provide a learned model proposal system, a learned model proposal method, and a program that can obtain highly accurate image analysis results without spending time on learning.
  • FIG. 4 is a diagram illustrating the functional blocks of the camera 100 and the computer 200 and the relationship between the functions when performing image comparison.
  • the control unit 210 of the computer 200 implements the image acquisition module 213 in cooperation with the communication unit 220 and the storage unit 230. Further, the control unit 210 implements the image comparison module 214 in cooperation with the storage unit 230.
  • FIG. 5 is a flowchart of the learned model proposal process when image comparison is performed. The processing executed by each of the modules described above will be described along this flowchart. Since steps S501 to S507 in FIG. 5 correspond to steps S301 to S307 in FIG. 3, only step S508 and the subsequent steps will be described. As with step S301, step S501 can be omitted when sufficient learned models are already stored in the learned model database 23.
  • the image acquisition module 213 of the computer 200 transmits an image transmission request to the camera 100 in order to acquire an image to be subjected to image analysis (step S508).
  • the camera 100 receives an image transmission request from the computer 200 and transmits an image via the communication unit 120 (step S509).
  • the camera 100 may not only transmit an image captured in real time but also transmit an image captured by the camera 100 in the past and stored in the storage unit 130.
  • the image acquisition module 213 of the computer 200 acquires an image to be analyzed from the camera 100 (step S510).
  • The environment acquisition module 212 may analyze the image acquired in step S510 to determine the location, area, camera position, lighting, and the like.
  • Next, the image comparison module 214 of the computer 200 compares the image to be analyzed acquired in step S510 with the images of the predetermined learning data (teacher data) of the learned models stored in the learned model database 23 (step S511).
  • For this comparison, not all images of the learning data of a learned model need to be used; one or several images may be picked out and used.
  • When the image to be analyzed is similar to the images of the predetermined learning data of a learned model, image analysis using that learned model can be expected to be more accurate, so that learned model is proposed.
  • This comparison may be performed to confirm that the image is similar to the images of a learned model that matches the purpose and the environment, to narrow down the proposal when multiple learned models match the purpose and the environment, or to select a learned model to propose from among those that match only the purpose when no learned model matches both the purpose and the environment. When no suitable learned model is found, the system may report that none is suitable, or it may propose the learned model that matches the purpose and has the closest environment. A similarity-check sketch is given below.
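  • The following is a minimal sketch of the similarity check in step S511, using grayscale histogram correlation between the target image and a few sample images of a model's learning data. The similarity measure and the 0.8 threshold are assumptions for illustration; the patent does not specify a particular comparison method.

        # Compare the target image with sample learning-data images (illustrative only).
        import numpy as np

        def normalized_hist(img, bins=64):
            # Grayscale intensity histogram, normalized to sum to 1.
            h, _ = np.histogram(img, bins=bins, range=(0, 255))
            return h / max(h.sum(), 1)

        def is_similar(target_img, sample_imgs, threshold=0.8):
            t = normalized_hist(target_img)
            scores = [float(np.corrcoef(t, normalized_hist(s))[0, 1]) for s in sample_imgs]
            best = max(scores)
            return best >= threshold, best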
  • the learned model proposal module 241 of the computer 200 proposes an appropriate learned model based on the image comparison result in step S511 (step S512).
  • the proposal of the learned model here may be output to the input / output unit 240 of the computer 200 or may be output to the input / output unit of another terminal (not shown) used by the user.
  • FIG. 8 shows an example of the learned model proposal screen as described above.
  • In this way, by also acquiring and using the image to be analyzed, an existing learned model similar to it is proposed.
  • the means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program.
  • The program may be provided, for example, in a form delivered from a computer via a network (SaaS: Software as a Service), or in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), a DVD (DVD-ROM, DVD-RAM, etc.), or a compact memory.
  • the computer reads the program from the recording medium, transfers it to the internal storage device or the external storage device, stores it, and executes it.
  • the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to a computer via a communication line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The problem addressed by the present invention is to provide a trained model suggestion system, a trained model suggestion method, and a program that suggest a trained model capable of outputting accurate image analysis results for an image newly to be subjected to image analysis, without spending time on learning. The solution according to the invention is a trained model suggestion system comprising: a trained model database 23 that stores trained models for image analysis in association with purposes and environments; a purpose acquisition module 211 that acquires the purpose for which an image is to be analyzed; an environment acquisition module 212 that acquires the environment in which the image is captured; and a trained model suggestion module 241 that refers to the trained model database 23 and suggests a trained model corresponding to the purpose and the environment.
PCT/JP2018/020288 2018-05-28 2018-05-28 Système de suggestion de modèle entraîné, procédé de suggestion de modèle entraîné, et programme WO2019229789A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/020288 WO2019229789A1 (fr) 2018-05-28 2018-05-28 Système de suggestion de modèle entraîné, procédé de suggestion de modèle entraîné, et programme
JP2020521646A JP7068745B2 (ja) 2018-05-28 2018-05-28 学習済モデル提案システム、学習済モデル提案方法、およびプログラム

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/020288 WO2019229789A1 (fr) 2018-05-28 2018-05-28 Système de suggestion de modèle entraîné, procédé de suggestion de modèle entraîné, et programme

Publications (1)

Publication Number Publication Date
WO2019229789A1 true WO2019229789A1 (fr) 2019-12-05

Family

ID=68697263

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/020288 WO2019229789A1 (fr) 2018-05-28 2018-05-28 Système de suggestion de modèle entraîné, procédé de suggestion de modèle entraîné, et programme

Country Status (2)

Country Link
JP (1) JP7068745B2 (fr)
WO (1) WO2019229789A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626950A (zh) * 2020-05-19 2020-09-04 上海集成电路研发中心有限公司 一种图像去噪模型的在线训练装置及方法
WO2021261140A1 (fr) * 2020-06-22 2021-12-30 株式会社片岡製作所 Dispositif de traitement de cellules, dispositif d'apprentissage et dispositif de proposition de modèle appris
US20220100987A1 (en) * 2020-09-28 2022-03-31 Yokogawa Electric Corporation Monitoring device, learning apparatus, method and storage medium
WO2022064631A1 (fr) * 2020-09-25 2022-03-31 日本電気株式会社 Système d'analyse d'image et procédé d'analyse d'image
JP7305850B1 (ja) 2022-06-30 2023-07-10 菱洋エレクトロ株式会社 機械学習を利用したシステム、端末、サーバ、方法、及び、プログラム

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240029126A (ko) * 2022-08-26 2024-03-05 한국전자기술연구원 설치환경에 최적화된 딥러닝 모델 생성 시스템 및 방법, 이의 학습 데이터 구성 방법

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017159614A1 (fr) * 2016-03-14 2017-09-21 オムロン株式会社 Dispositif de fourniture de services d'apprentissage
WO2018078862A1 (fr) * 2016-10-31 2018-05-03 株式会社オプティム Système d'analyse d'image, procédé d'analyse d'image, et programme

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017159614A1 (fr) * 2016-03-14 2017-09-21 オムロン株式会社 Dispositif de fourniture de services d'apprentissage
WO2018078862A1 (fr) * 2016-10-31 2018-05-03 株式会社オプティム Système d'analyse d'image, procédé d'analyse d'image, et programme

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626950A (zh) * 2020-05-19 2020-09-04 上海集成电路研发中心有限公司 一种图像去噪模型的在线训练装置及方法
WO2021261140A1 (fr) * 2020-06-22 2021-12-30 株式会社片岡製作所 Dispositif de traitement de cellules, dispositif d'apprentissage et dispositif de proposition de modèle appris
WO2022064631A1 (fr) * 2020-09-25 2022-03-31 日本電気株式会社 Système d'analyse d'image et procédé d'analyse d'image
US20220100987A1 (en) * 2020-09-28 2022-03-31 Yokogawa Electric Corporation Monitoring device, learning apparatus, method and storage medium
JP2022055229A (ja) * 2020-09-28 2022-04-07 横河電機株式会社 監視用デバイス、学習装置、方法およびプログラム
US11881048B2 (en) 2020-09-28 2024-01-23 Yokogawa Electric Corporation Monitoring device, learning apparatus, method and storage medium
JP7305850B1 (ja) 2022-06-30 2023-07-10 菱洋エレクトロ株式会社 機械学習を利用したシステム、端末、サーバ、方法、及び、プログラム
JP7398587B1 (ja) 2022-06-30 2023-12-14 菱洋エレクトロ株式会社 機械学習を利用したシステム、端末、サーバ、方法、及び、プログラム
JP2024005989A (ja) * 2022-06-30 2024-01-17 菱洋エレクトロ株式会社 機械学習を利用したシステム、端末、サーバ、方法、及び、プログラム

Also Published As

Publication number Publication date
JPWO2019229789A1 (ja) 2021-06-24
JP7068745B2 (ja) 2022-05-17

Similar Documents

Publication Publication Date Title
WO2019229789A1 (fr) Système de suggestion de modèle entraîné, procédé de suggestion de modèle entraîné, et programme
CN106255866B (zh) 通信系统、控制方法以及存储介质
US11450353B2 (en) Video tagging by correlating visual features to sound tags
US20200089661A1 (en) System and method for providing augmented reality challenges
CN109635621A (zh) 用于第一人称视角中基于深度学习识别手势的系统和方法
WO2019156332A1 (fr) Dispositif de production de personnage d'intelligence artificielle pour réalité augmentée et système de service l'utilisant
KR20200076169A (ko) 놀이 컨텐츠를 추천하는 전자 장치 및 그의 동작 방법
CN111492374A (zh) 图像识别系统
JPWO2018142756A1 (ja) 情報処理装置及び情報処理方法
KR102646344B1 (ko) 이미지를 합성하기 위한 전자 장치 및 그의 동작 방법
JPWO2018100678A1 (ja) コンピュータシステム、エッジデバイス制御方法及びプログラム
US11030479B2 (en) Mapping visual tags to sound tags using text similarity
US20190012347A1 (en) Information processing device, method of processing information, and method of providing information
JP2010224715A (ja) 画像表示システム、デジタルフォトフレーム、情報処理システム、プログラム及び情報記憶媒体
US20200112838A1 (en) Mobile device that creates a communication group based on the mobile device identifying people currently located at a particular location
US20190289360A1 (en) Display apparatus and control method thereof
CN104486548A (zh) 一种信息处理方法及电子设备
US9992407B2 (en) Image context based camera configuration
US10965915B2 (en) Collection system, program for terminal, and collection method
US11677836B2 (en) Server apparatus, communication system and communication method
KR20200013164A (ko) 전자 장치, 및 전자 장치의 제어 방법
CN106412469B (zh) 投影系统、投影装置与投影系统的投影方法
CN112346566A (zh) 一种交互学习方法、装置、智能学习设备及存储介质
JP6267840B1 (ja) コンピュータシステム、エッジデバイス制御方法及びプログラム
JP2020042528A (ja) オブジェクト識別システム、モデル学習システム、オブジェクト識別方法、モデル学習方法、プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921005

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2020521646

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18921005

Country of ref document: EP

Kind code of ref document: A1