WO2020149242A1 - Work support device, work support method, program, and object detection model - Google Patents
Work support device, work support method, program, and object detection model
- Publication number
- WO2020149242A1 PCT/JP2020/000731
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- work support
- probability
- detection model
- support device
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- the present disclosure relates to a work support device, a work support method, a program, and an object detection model.
- Patent Document 1 (Japanese Patent Laid-Open No. 2018-169672) discloses a technique that identifies a pattern for which the number of teacher images is insufficient and generates a new teacher image belonging to that pattern by spatially inverting an existing teacher image or changing its color tone.
- the work support device is a work support device that supports the work of setting whether or not an object is shown in a region within an image.
- the work support device includes an extraction unit and a generation unit.
- the extraction unit uses an object detection model, constructed using teacher images in which the object appears, to extract from an arbitrary moving image, as a candidate image, an image containing a region in which the probability that the object appears is equal to or greater than a first threshold value.
- the generation unit generates coordinate information of the region in the candidate image in which the probability that the object is captured is equal to or higher than the first threshold value.
- the work support device includes an extraction unit and an output unit.
- the extraction unit uses an object detection model, constructed using teacher images in which the object appears, to extract from an arbitrary moving image, as a candidate image, an image containing a region in which the probability that the object appears is equal to or greater than a first threshold value.
- the output unit outputs an image in which a bounding box corresponding to the region in which the probability that the object is captured is equal to or higher than the first threshold has been combined with the candidate image.
- FIG. 3 is a partially enlarged view of FIG. 2.
- FIG. 4 is a schematic diagram for explaining the concept of information processing by the processing unit 24.
- FIG. 5 is a flowchart for explaining the operation of the work support device 20.
- FIG. 6 is a flowchart for explaining the operation of the work support device 20 according to Modification A.
- FIG. 7 is a schematic diagram for explaining end-to-end learning according to Modification B.
- FIG. 8 is a schematic diagram showing an example of the display screen of the work support device 20 according to Modifications C and D.
- FIG. 1 is a schematic diagram showing the configuration of the work support device 20 according to the present embodiment.
- FIG. 2 is a diagram showing an example of the candidate image Gk output by the work support device 20.
- FIG. 3 is a partially enlarged view of a broken line portion of FIG. 2.
- FIG. 4 is a schematic diagram for explaining the concept of information processing by the processing unit 24 described later.
- the work support device 20 is a device that supports the work of generating the teacher image Gt used for the object detection model 21M.
- Gt is used to collectively denote a plurality of teacher images, and a subscript, as in Gt1, is used when individual teacher images are described separately.
- the “object detection model 21M” is constructed by a neural network whose weights are adjusted based on teacher images Gt in which the object O appears, and performs region extraction of the object O in an image and object recognition of the object O. Specifically, when an image is input, the object detection model 21M calculates the probability that the object O appears in the image, and if the calculated probability is equal to or higher than a predetermined value, it outputs the region in which the object O appears.
- the target object detection model 21M can detect a plurality of preset target objects.
- the area of the object O is defined by coordinate information b1 to b4 corresponding to the four vertices of the bounding box B combined in the image Gk. Therefore, in the teacher image Gt of the object detection model 21M, the object O is imaged in the area corresponding to the coordinate information b1 to b4 of the bounding box B.
- a “traffic light” is shown as the object O, but the object O is not limited to this.
- any object can be adopted as the object O.
- the object O can be set not only by the type of object but also by the state or the like.
- the object O may be set not simply as a traffic light, but separately as a traffic light displaying a red light and a traffic light displaying a green light.
- the work support device 20 can be realized by any computer, and includes a storage unit 21, an input unit 22, an output unit 23, and a processing unit 24.
- the work support device 20 may be realized as hardware using an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like.
- the storage unit 21 stores various kinds of information, and is realized by an arbitrary storage device such as a memory and a hard disk.
- the storage unit 21 stores information such as the weight of the neural network that constructs the object detection model 21M.
- the storage unit 21 stores a plurality of teacher images Gt, and stores a plurality of teacher images Gt1 to Gtp (p is a natural number other than 1) in the initial state.
- the storage unit 21 also stores a moving image GD for generating a new teacher image Gtq (q is a value other than 1 to p).
- the moving image GD is captured by an arbitrary image capturing device.
- the input unit 22 is realized by an arbitrary input device such as a keyboard, a mouse, a touch panel, etc., and inputs various information to a computer.
- the output unit 23 is realized by any output device such as a display, a touch panel, a speaker, etc., and outputs various information from the computer.
- the processing unit 24 executes various types of information processing, and is realized by a processor such as a CPU or GPU and a memory.
- the processing unit 24 functions as the extraction unit 24A, the generation unit 24B, the synthesis unit 24C, the setting unit 24D, and the updating unit 24E by reading one or more programs stored in the storage unit 21 into the CPU, GPU, or the like of the computer.
- each function of the processing unit 24 will be described with reference to FIG.
- using the object detection model 21M, the extraction unit 24A extracts, from the images Gd1 to Gdj of the frames of an arbitrary moving image GD, an image containing a region in which the probability that the object O appears is equal to or higher than the first threshold P1 and equal to or lower than the second threshold P2, as a candidate image Gk (denoted as Gk1 to Gk3 in FIG. 4).
- the first threshold P1 is set to about 10%
- the second threshold P2 is set to about 60%.
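- as a concrete illustration of this extraction step, the following is a minimal sketch in Python, assuming OpenCV is available and assuming a detection function detect(frame) that returns (probability, box) pairs for the object; the function name, its return format, and the generator below are illustrative assumptions, not part of the present disclosure.

```python
# Minimal sketch of candidate-image extraction from a moving image GD.
# Assumption: detect(frame) returns (probability, box) pairs for the target object.
import cv2

P1 = 0.10  # first threshold P1 (about 10%)
P2 = 0.60  # second threshold P2 (about 60%)

def extract_candidates(video_path, detect):
    """Yield (frame, box, probability) for frames containing a region whose
    detection probability is between P1 and P2 inclusive."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        for probability, box in detect(frame):
            if P1 <= probability <= P2:
                yield frame, box, probability
    capture.release()
```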
- the generation unit 24B generates coordinate information of an area in the candidate image Gk in which the probability that the object O is captured is equal to or higher than the first threshold P1.
- the generation unit 24B also generates a file describing the coordinate information b1 to b4 of the vertices of the bounding box B (denoted as B1 to B3 in FIG. 4) corresponding to the region in which the object O is captured.
- the coordinate information b1 to b4 can be defined by the two-dimensional coordinates of each vertex. Alternatively, assuming that the bounding box B is a square or a rectangle, the coordinate information b1 to b4 may be defined by the two-dimensional coordinates of one vertex together with the width and height measured from that vertex.
- in the former case, a file describing eight values corresponding to the four vertices in two-dimensional coordinates is generated.
- in the latter case, a file describing a total of four values, two for one vertex in two-dimensional coordinates and two for the width and height from that vertex, is generated.
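- the following is a minimal sketch of the two file formats described above, assuming JSON files and illustrative field names that are not part of the present disclosure.

```python
# Sketch of the two coordinate-file layouts: eight values for the four vertices
# b1 to b4, or four values for one vertex plus the width and height from it.
import json

def vertices_record(b1, b2, b3, b4):
    """Eight values: the two-dimensional coordinates of the four vertices."""
    return {"vertices": [list(b1), list(b2), list(b3), list(b4)]}

def vertex_size_record(b1, width, height):
    """Four values: one vertex plus the width and height from that vertex,
    assuming the bounding box B is a square or a rectangle."""
    x, y = b1
    return {"x": x, "y": y, "width": width, "height": height}

def write_annotation(path, record):
    with open(path, "w") as f:
        json.dump(record, f)

# Example: a box whose top-left vertex is (120, 40), 30 pixels wide and 60 tall.
write_annotation("candidate_0001.json", vertex_size_record((120, 40), 30, 60))
```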
- the synthesis unit 24C combines the bounding box B with the candidate image Gk and displays the result on the display of the output unit 23.
- the setting unit 24D sets, via the input unit 22, whether or not the object O is shown in the bounding box B combined with the candidate image Gk.
- in the example of FIG. 4, the object O is shown in the bounding boxes B1 and B3 of the candidate images Gk1 and Gk3, but an object P other than the object O is shown in the bounding box B2 of the candidate image Gk2.
- in this case, the setting unit 24D sets, via the input unit 22, that the candidate images Gk1 and Gk3 are object images in which the object O is captured (U1, U3).
- on the other hand, the setting unit 24D sets, via the input unit 22, that the candidate image Gk2 is not an object image in which the object O is captured (U2).
- the setting unit 24D can also generate an arbitrary bounding box B in the image by designating its coordinates, and set, using that bounding box B, that the image is an object image in which the object O is captured.
- the updating unit 24E updates the object detection model 21M by adding images for which whether or not the object O is captured has been set to the current teacher images Gt and readjusting the weights of the neural network.
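- a heavily simplified sketch of this update step is shown below, assuming a PyTorch/torchvision-style detector that, in training mode, takes images together with target dictionaries ("boxes", "labels") and returns a dictionary of losses; the dataset layout and hyperparameters are illustrative assumptions, and the patent does not specify this training procedure.

```python
# Sketch: readjusting the detector's weights after newly set object images have
# been appended to the teacher image set.
import torch
from torch.utils.data import DataLoader

def update_model(model, teacher_dataset, epochs=1, lr=1e-4, device="cpu"):
    """Fine-tune the detection model on the updated teacher image set."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loader = DataLoader(teacher_dataset, batch_size=2, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    for _ in range(epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)   # detection losses in training mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```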
- FIG. 5 is a flowchart for explaining the operation of the work support device 20 according to the present embodiment.
- a moving image GD of the surrounding environment of the image capturing device is captured by an arbitrary image capturing device.
- these moving images GD are stored in the storage unit 21 of the work support device 20 in a timely manner (S1).
- when the probability that the object O appears is within the above range (equal to or higher than the first threshold P1 and equal to or lower than the second threshold P2), the work support apparatus 20 outputs the image of that frame as a "candidate image Gk" (S4-Yes, S5). Specifically, as shown in FIGS. 2 and 3, an image in which the bounding box B has been combined with the area where the object O appears is displayed on the display constituting the output unit 23.
- the worker determines whether the candidate image Gk is an object image in which the object O is captured (S6).
- the work support apparatus 20 accepts the setting as to whether the candidate image Gk is an object image, via the input unit 22 and the setting unit 24D. For example, when images such as those shown in FIGS. 2 and 3 are displayed as the candidate image Gk, the "traffic light" that is the object O appears in the bounding box B, so the setting that the image is an object image can be accepted without any additional work by the operator (S6-Yes, S7).
- conversely, when the object O is not shown in the candidate image Gk, an object other than the object O has been erroneously recognized as a "traffic light" and extracted; the bounding box of the erroneously recognized object is therefore removed, and the setting that the image is not an object image is accepted (S6-No, S8).
- using the object detection model 21M, the extraction unit 24A extracts, from an arbitrary moving image GD, an image containing a region in which the probability that the object O appears is equal to or higher than the first threshold P1 and equal to or lower than the second threshold P2, as a candidate image Gk.
- the generation unit 24B generates coordinate information b1 to b4 of a region in the candidate image Gk in which the probability that the object O is photographed is not less than the first threshold P1 and not more than the second threshold P2.
- the work support device 20 also includes a setting unit 24D.
- the setting unit 24D accepts, through the operator's operation of the input unit 22, the setting that the object O is captured in the bounding box B or that the object O is not captured in the bounding box B.
- the worker can efficiently perform the work of setting whether or not the object O is imaged in the image Gdi of each frame of the moving image GD.
- the worker only needs to confirm whether or not the object O is imaged in the area corresponding to the bounding box B in the candidate image Gk in which the object O is displayed with a certain probability of appearance.
- the image showing the object O can be used as a new teacher image Gtq (see FIG. 4).
- according to the work support device 20, it is possible to efficiently collect a large number of teacher images Gt.
- the extraction unit 24A extracts, as the candidate image Gk, an image containing a region in which the probability that the object O appears is equal to or higher than the first threshold P1; therefore, images in which the object O does not appear are excluded. In other words, the extraction unit 24A does not extract noise images as candidate images Gk. As a result, the work of setting whether or not each image is an image in which the object O is captured is made more efficient.
- the extraction unit 24A extracts, as a candidate image Gk, an image in which the probability that the object O is captured is equal to or less than the second threshold value P2. As a result, it is possible to efficiently collect a new teacher image that contributes to the improvement of the detection accuracy of the object detection model 21M.
- even if the weights of the object detection model 21M are updated by adding, to the current teacher image group Gt1 to Gtp, an object image that can already be detected with high probability using that group, such a teacher image often does not contribute to improving the detection accuracy of the object detection model 21M.
- on the other hand, when an object image that cannot be detected with high probability using the current teacher image group Gt1 to Gtp is added and the weights are updated, a significant change occurs in the feature amounts of the object O extracted from the current teacher image group Gt1 to Gtp, and the detection accuracy of the object detection model 21M is improved.
- by extracting, as candidate images Gk, images that are not detected with high probability under the current teacher image group Gt1 to Gtp, the work support apparatus 20 makes it possible to efficiently collect new teacher images that contribute to improving the detection accuracy of the object detection model 21M.
- the work support device 20 further includes an updating unit 24E.
- the updating unit 24E adds images for which whether or not the object O is captured has been set to the current teacher images Gt, readjusts the weights of the neural network, and thereby updates the object detection model 21M.
- as a result, the accuracy with which the object detection model 21M detects the object O improves as the work support device 20 is used, and an object detection model with high detection accuracy can be provided.
- the object detection model 21M can detect a plurality of objects.
- the setting unit 24D can also accept a change in the setting of the object corresponding to the bounding box B. Specifically, for a candidate image output because the probability that a first object is captured is equal to or higher than the first threshold, it can be set that a second object, rather than the first object, is captured. For example, when a bounding box B is displayed indicating that the probability that a traffic light displaying a green light appears is equal to or higher than the first threshold and equal to or lower than the second threshold, but a traffic light displaying a red light actually appears in the candidate image Gk, the user can set, through the input unit 22 and the setting unit 24D, that the candidate image Gk contains a traffic light displaying a red light.
- the extraction unit 24A may stop extracting the candidate image Gk when the amount of change from the previously extracted image is equal to or less than a predetermined amount.
- the extraction unit 24A according to Modification A stores the previously extracted image as a reference image Gc.
- even when an image Gdi of one frame of the moving image GD contains a region in which the probability that the object O is captured is equal to or higher than the first threshold P1 and equal to or lower than the second threshold P2, the extraction unit 24A stops extracting that image as a candidate image Gk if the amount of change of the image from the reference image Gc is equal to or less than the predetermined amount.
- the work support device 20 according to this Modification A executes the operation shown in the flowchart of FIG. 6.
- steps T1 to T4, T6 to T8, and T10 to T12 execute the same processing as steps S1 to S9 described above, respectively.
- steps T5 and T9 are added.
- in step T5, the candidate image Gk is extracted only when the amount of change from the reference image Gc is larger than the predetermined amount.
- in step T9, when an object image is newly set, that object image is set as the new reference image Gc.
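- a minimal sketch of the change-amount check used in steps T5 and T9 is shown below, assuming the amount of change is measured as the mean absolute pixel difference from the reference image Gc; the metric and the threshold value are illustrative assumptions, since the disclosure only requires some predetermined amount.

```python
# Sketch: frames are extracted as candidates only when they differ from the
# reference image Gc by more than a predetermined amount (step T5); a newly set
# object image then becomes the new reference image Gc (step T9).
import numpy as np

CHANGE_THRESHOLD = 12.0  # illustrative "predetermined amount"

def changed_enough(frame, reference):
    """Return True when the frame differs from the reference image by more
    than the predetermined amount."""
    diff = np.abs(frame.astype(np.float32) - reference.astype(np.float32))
    return float(diff.mean()) > CHANGE_THRESHOLD
```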
- the work support apparatus 20 does not collect the candidate images Gk that do not contribute to the improvement in the detection accuracy of the object detection model 21M.
- an image whose amount of change from the reference image Gc is equal to or less than the predetermined amount is an image similar to the reference image Gc. Therefore, even if such an image is added as a new teacher image to the current teacher image group Gt1 to Gtp and the weights of the object detection model 21M are updated, there is often no significant change in the feature amounts of the object O extracted from the current teacher image group Gt1 to Gtp. That is, such a teacher image often does not contribute to improving the detection accuracy of the object detection model 21M.
- the object detection model 21M can be quickly constructed while reducing the calculation load.
- the work support device 20 according to the modification A can efficiently collect the candidate images Gk that contribute to the improvement of the detection accuracy of the object detection model 21M.
- the object detection model 21M may be constructed by a neural network that performs region extraction of the object O and object recognition of the object O end to end. With such a configuration, the object O can be detected at high speed, and the object O can be detected in real time.
- here, "end to end" means, as conceptually shown in FIG. 7A, learning the input/output relationship directly through a single neural network whose structure is suited to both the region extraction of the object O and the object recognition of the object O.
- such an object detection model 21M can be realized by using an algorithm such as YOLO (You Only Look Once) or SSD (Single Shot MultiBox Detector).
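- as one illustration of this family of detectors, the following sketch runs an off-the-shelf single-shot model from torchvision; this is only an example of the YOLO/SSD family named above, not the model actually used in the present disclosure, and it assumes a recent torchvision release that accepts the weights argument.

```python
# Sketch: end-to-end detection with a pretrained SSD300 model; one forward pass
# returns boxes, labels, and scores for the input image.
import torch
import torchvision

model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

image = torch.rand(3, 300, 300)          # placeholder input tensor in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]        # dict with "boxes", "labels", "scores"
print(prediction["boxes"].shape, prediction["scores"][:5])
```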
- however, the object detection model 21M is not limited to this; as conceptually shown in FIG. 7B, it may be constructed by a combination of an algorithm and a neural network that perform the region extraction of the object O and the object recognition of the object O separately.
- the work support device 20 may combine, with the candidate image Gk, an indication of the type of the object and the value of the probability that the object O is captured, and output the result. Accordingly, the worker can easily recognize what the object O shown in the candidate image Gk is.
- in the example of FIG. 8, the image corresponding to the bounding box B is a traffic light displaying a red light (denoted as Red_light in FIG. 8), and the probability that it is a traffic light displaying a red light is 43.21%.
- the area indicated by the symbol M is displayed near the corresponding bounding box B.
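- a minimal sketch of compositing the object type and probability near the bounding box, as in the display example above, is shown below; it uses OpenCV, and the colors and text placement are illustrative assumptions.

```python
# Sketch: draw the bounding box B and an adjacent label such as "Red_light 43.21%".
import cv2

def draw_candidate(image, box, label, probability):
    x1, y1, x2, y2 = [int(v) for v in box]
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
    text = f"{label} {probability * 100:.2f}%"
    cv2.putText(image, text, (x1, max(0, y1 - 8)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return image
```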
- using the object detection model 21M, the work support device 20 may store images extracted from an arbitrary moving image GD that contain a region in which the probability that the object O is captured is equal to or higher than the first threshold and equal to or lower than the second threshold, as candidate images Gk, in a separate folder for each object. Further, the work support device 20 may output the candidate images Gk stored in each folder together with the bounding box B.
- for example, an operator can open a folder in which a plurality of candidate images containing a traffic light displaying a green light have been accumulated and display the images in that folder one after another, making it possible to efficiently determine whether a traffic light displaying a green light is actually captured in those candidate images. Further, while checking the images in the folder in succession, the operator can click the icon indicated by symbol I2 in FIG. 8 to display the next image. When the icon I2 is clicked, the next image is displayed and, at the same time, the candidate image Gk currently being displayed is set as containing a traffic light displaying a green light. In short, the worker can perform the annotation work for generating the teacher images Gt used for the object detection model 21M simply by clicking the icon I2 while checking the images in succession. Note that the symbol I1 in FIG. 8 is an icon meaning return to the previous image; when this icon I1 is clicked, the previously displayed candidate image is displayed.
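- a minimal sketch of the folder-per-object storage described above is shown below, assuming OpenCV for writing image files; the directory layout and file naming are illustrative assumptions.

```python
# Sketch: store each candidate image in a folder named after the detected object,
# e.g. candidates/green_light/000123.png.
import os
import cv2

def save_candidate(image, object_name, index, root="candidates"):
    folder = os.path.join(root, object_name)
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, f"{index:06d}.png")
    cv2.imwrite(path, image)
    return path
```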
- the present disclosure is not limited to the above embodiments as they are.
- the present disclosure can be embodied by modifying the constituent elements within the scope not departing from the gist of the present invention at the implementation stage.
- the present disclosure can form various disclosures by appropriately combining a plurality of constituent elements disclosed in each of the above-described embodiments. For example, some components may be deleted from all the components shown in the embodiment. Further, the constituent elements may be appropriately combined with different embodiments.
- the work support apparatus of the first aspect is a work support apparatus that supports the work of setting whether or not an object is captured in a region within an image.
- the work support device includes an extraction unit and a generation unit.
- the extraction unit uses an object detection model, constructed using teacher images in which the object appears, to extract from an arbitrary moving image, as a candidate image, an image containing a region in which the probability that the object appears is equal to or higher than a first threshold value.
- the generation unit generates coordinate information of the region in the candidate image in which the probability that the object is captured is equal to or higher than the first threshold value.
- the work support apparatus of the second aspect is the work support apparatus of the first aspect, in which the object detection model is constructed by a neural network that performs region extraction of the object and object recognition of the object end to end. With such a configuration, it is possible to speed up the detection of the object.
- the work support apparatus of the third aspect is the work support apparatus of the first or second aspect, in which the extraction unit extracts, as a candidate image, an image containing a region in which the probability that the object is captured is equal to or lower than a second threshold. With such a configuration, it is possible to efficiently collect candidate images that contribute to improving the detection accuracy of the object detection model.
- the work support apparatus of the fourth aspect is the work support apparatus of any of the first to third aspects, in which the extraction unit stops extracting the candidate image when the amount of change from the previously extracted candidate image is equal to or less than a predetermined amount. As a result, the collection of candidate images that do not contribute to improving the detection accuracy of the object detection model is stopped, and candidate images that do contribute to improving the detection accuracy of the object detection model can be collected efficiently.
- the work support apparatus of the fifth aspect is the work support apparatus of any of the first to fourth aspects, and further includes an updating unit.
- the updating unit updates the object detection model by adding, to the teacher images, an image for which whether or not the object is captured has been set. With such a configuration, candidate images that contribute to improving the detection accuracy of the object can be collected efficiently as the device is used.
- the object detection model of the sixth aspect is the object detection model updated by the work support apparatus of the fifth aspect. Therefore, an object detection model with high detection accuracy can be provided.
- the program according to the seventh aspect causes a computer to function as a work support device that supports the work of setting whether or not an object is shown in a region within an image.
- This program causes a computer to function as an extraction unit and a generation unit.
- the extraction unit uses an object detection model, constructed using teacher images in which the object appears, to extract from an arbitrary moving image, as a candidate image, an image containing a region in which the probability that the object appears is equal to or higher than a first threshold value.
- the generation unit generates coordinate information of a region in the candidate image in which the probability that the object is captured is equal to or higher than the first threshold value.
- the work support method of the eighth aspect is a method of using a computer to support the work of setting whether or not an object is shown in a region within an image.
- in this work support method, using an object detection model constructed from teacher images in which the object appears, an image containing a region in which the probability that the object appears is equal to or higher than a first threshold is extracted from an arbitrary moving image as a candidate image.
- then, coordinate information of the region in the candidate image in which the probability that the object is captured is equal to or higher than the first threshold is generated. Therefore, according to this work support method, whether or not an image is an object image in which the object is captured can be set efficiently. As a result, a large number of teacher images can be collected efficiently.
- the work support apparatus of the ninth aspect includes an extraction unit and an output unit.
- the extraction unit uses an object detection model, constructed using teacher images in which the object appears, to extract from an arbitrary moving image, as a candidate image, an image containing a region in which the probability that the object appears is equal to or higher than a first threshold value. Further, the output unit outputs an image in which a bounding box corresponding to the region in which the probability that the object is captured is equal to or higher than the first threshold has been combined with the candidate image.
- the work support apparatus of the tenth aspect is the work support apparatus of the ninth aspect, and further includes a setting unit that accepts the setting that the object is captured in the bounding box or that the object is not captured in the bounding box. Through this setting unit, whether or not an image is an object image in which the object is captured can be set efficiently.
- the work support apparatus of the eleventh aspect is the work support apparatus of the ninth or tenth aspect, and the object detection model detects a plurality of objects.
- the work support device further includes a setting unit that accepts, for an image in which a bounding box corresponding to a region in which the probability that a first object is captured is equal to or higher than the first threshold has been combined, the setting that a second object, rather than the first object, is captured. With such a configuration, correction can be performed easily when the object is erroneously detected.
- the work support apparatus of the twelfth aspect is the work support apparatus of any of the ninth to eleventh aspects, in which the object detection model detects a plurality of objects.
- using the object detection model, the work support device stores images extracted from an arbitrary moving image that contain a region in which the probability that the object is captured is equal to or higher than a first threshold, as candidate images, in a folder for each object. With this, by displaying the images accumulated in each folder one after another, the operator only has to confirm whether or not the object is captured, which reduces the burden on the operator.
- the work support apparatus of the thirteenth aspect is the work support apparatus of any of the ninth to twelfth aspects, and combines the value of the probability that the object is captured with the candidate image and outputs the result. This allows the operator to easily recognize what the object shown in the candidate image is.
- the work support apparatus of the fourteenth aspect is the work support apparatus of any of the ninth to thirteenth aspects, in which the extraction unit extracts, as a candidate image, an image containing a region in which the probability that the object is captured is equal to or lower than the second threshold. With such a configuration, it is possible to efficiently collect candidate images that contribute to improving the detection accuracy of the object detection model.
- the work support apparatus of the fifteenth aspect is the work support apparatus of any of the ninth to fourteenth aspects, in which the extraction unit stops extracting the candidate image when the amount of change from the previously extracted candidate image is equal to or less than a predetermined amount. As a result, it is possible to efficiently collect candidate images that contribute to improving the detection accuracy of the object detection model.
- the work support apparatus of the sixteenth aspect is the work support apparatus of any of the ninth to fifteenth aspects, and further includes an updating unit that updates the object detection model by adding, to the teacher images, an image for which whether or not the object is captured has been set. With such a configuration, candidate images that contribute to improving the detection accuracy of the object can be collected efficiently as the device is used.
- the program of the seventeenth aspect causes a computer to function as an extraction unit and an output unit.
- the extraction unit uses an object detection model, constructed using teacher images in which the object appears, to extract from an arbitrary moving image, as a candidate image, an image containing a region in which the probability that the object appears is equal to or higher than a first threshold value.
- the output unit outputs an image in which a bounding box corresponding to the region in which the probability that the object is captured is equal to or higher than the first threshold value has been combined with the candidate image.
- the work support method of the eighteenth aspect is a method of using a computer to support the work of setting whether or not an object (O) is captured in a region within an image.
- in this work support method, using an object detection model constructed from teacher images in which the object appears, an image containing a region in which the probability that the object appears is equal to or higher than a first threshold is extracted from an arbitrary moving image as a candidate image.
- then, an image in which a bounding box corresponding to the region in which the probability that the object is captured is equal to or higher than the first threshold has been combined with the candidate image is output. Therefore, according to this work support method, whether or not an image is an object image in which the object is captured can be set efficiently by keeping or deleting the displayed bounding box. As a result, a large number of teacher images can be collected efficiently.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention makes it possible to efficiently determine whether an image is an image in which an object appears. A work support device 20 is provided with an extraction unit 24A and an output unit 23. The extraction unit 24A uses an object detection model 21M, constructed using a teacher image Gt in which an object O appears, to extract from an arbitrary moving image GD, as a candidate image Gk, an image containing a region for which the probability that the object appears is equal to or greater than a first threshold value. The output unit 23 outputs an image in which a bounding box B corresponding to the region for which the probability that the object appears is equal to or greater than the first threshold value has been combined with the relevant candidate image.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-006450 | 2019-01-17 | ||
JP2019006450A JP6508797B1 (ja) | 2019-01-17 | 2019-01-17 | 作業支援装置、作業支援方法、プログラム、及び対象物検知モデル。 |
JP2019-066075 | 2019-03-29 | ||
JP2019066075A JP6756961B1 (ja) | 2019-03-29 | 2019-03-29 | 作業支援装置、作業支援方法、プログラム、及び対象物検知モデル。 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020149242A1 true WO2020149242A1 (fr) | 2020-07-23 |
Family
ID=71613028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/000731 WO2020149242A1 (fr) | 2019-01-17 | 2020-01-10 | Dispositif d'aide au travail, procédé d'aide au travail, programme et modèle de détection d'objet |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020149242A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023170912A1 (fr) * | 2022-03-11 | 2023-09-14 | 日本電気株式会社 | Dispositif de traitement d'informations, procédé de génération, procédé de traitement d'informations et support lisible par ordinateur |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07160661A (ja) * | 1993-12-02 | 1995-06-23 | Hitachi Ltd | ニューラルネットワークの教師データ自動抽出方法と、それを用いたニューラルネットワークシステム、並びに、プラント運転支援装置 |
JP2013232181A (ja) * | 2012-04-06 | 2013-11-14 | Canon Inc | 画像処理装置、画像処理方法 |
JP2015191334A (ja) * | 2014-03-27 | 2015-11-02 | キヤノン株式会社 | 情報処理装置、情報処理方法 |
JP2017162025A (ja) * | 2016-03-07 | 2017-09-14 | 株式会社東芝 | 分類ラベル付与装置、分類ラベル付与方法、およびプログラム |
JP2018151833A (ja) * | 2017-03-13 | 2018-09-27 | パナソニック株式会社 | 識別器学習装置および識別器学習方法 |
JP2018151843A (ja) * | 2017-03-13 | 2018-09-27 | ファナック株式会社 | 入力画像から検出した対象物の像の尤度を計算する画像処理装置および画像処理方法 |
US20180348346A1 (en) * | 2017-05-31 | 2018-12-06 | Uber Technologies, Inc. | Hybrid-View Lidar-Based Object Detection |
- 2020-01-10 WO PCT/JP2020/000731 patent/WO2020149242A1/fr active Application Filing
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20741973; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20741973; Country of ref document: EP; Kind code of ref document: A1