WO2014141534A1 - Comparison system, terminal device, server device, comparison method, and program - Google Patents

Comparison system, terminal device, server device, comparison method, and program

Info

Publication number
WO2014141534A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
input
displayed
unit
Application number
PCT/JP2013/080637
Other languages
French (fr)
Japanese (ja)
Inventor
陽三 平木
正 安達
Original Assignee
日本電気株式会社
Application filed by 日本電気株式会社 (NEC Corporation)
Priority to CN201380074556.9A priority Critical patent/CN105008251B/en
Priority to JP2015505229A priority patent/JP6123881B2/en
Publication of WO2014141534A1 publication Critical patent/WO2014141534A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Definitions

  • the present invention relates to a collation system, a terminal device, a server device, a collation method, and a program.
  • slabs: steel processed into a thin, plate-like shape
  • billets: cylindrical or prismatic steel
  • blooms: shallow-shaped steel
  • beam blanks: steel with a shape close to an H-shape
  • These steel materials such as slabs are stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address.
  • the manager manages the storage state of the plurality of steel materials using storage information that indicates which steel material, identified by its identification information (e.g., lot number), is stored at which address and at which stage.
  • the storage information is updated when a new steel material is added to the storage location, a stored steel material is shipped, or a stored steel material is moved to another address. With such storage information, when a steel material having certain identification information is to be shipped, its storage location can be easily identified.
  • before shipping, it must be confirmed that the identification information attached to the surface of the steel material actually stored at the location specified using the storage information matches the identification information of the steel material to be shipped.
  • in addition, collation work may be performed at a predetermined timing using the identification information (e.g., printed) attached to the surface of the steel material stored at the specified position and the storage information, to check that the stored information is correct.
  • Patent Document 1 discloses a certificate-photographing camera for acquiring image data of a certificate submitted to a loan examination apparatus by a customer applying for a loan, the camera being characterized by photographing-frame display means for displaying, on a finder, a photographing frame that matches at least a part of the frame lines dividing the description items in the certificate.
  • conventionally, the collation work at the time of shipment and at predetermined timings as described above has been performed manually. That is, an operator visually compares the storage information with the identification information (e.g., printed) attached to the surface of a steel material stored at a predetermined position. Such an operation is very troublesome and requires a lot of time. In addition, human error may occur.
  • An object of the present invention is to provide a technique for efficiently performing the collation operation.
  • According to the present invention, there is provided a collation system comprising: storage means for storing correspondence information; input receiving means for receiving input of the address and stage information of the steel material to be verified; correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and stage information received by the input receiving means; output means having a viewfinder, which displays a pre-imaged and/or already-captured image on the viewfinder and displays, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; imaging means for capturing the image displayed on the viewfinder; image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted mark; and collation means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means.
  • a terminal device including the input receiving unit, the output unit, and the imaging unit included in the verification system.
  • a viewfinder displaying a pre-imaged image and / or an imaged image on the viewfinder, and overlaying the image with a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image
  • a terminal device comprising: a transmission unit that transmits only a partial image within the specific frame in the image captured by the imaging unit to an external device.
  • a viewfinder displaying a pre-imaged image and / or an imaged image on the viewfinder, and overlaying the image with a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image
  • Output means for displaying on the viewfinder, Imaging means for capturing the image displayed on the viewfinder;
  • a terminal device comprising: transmission means for transmitting the image to an external device together with information for identifying a partial image in the specific frame in the image captured by the imaging means.
  • a server device comprising the storage means, the correspondence information search means, and the collation means that the collation system has.
  • According to the present invention, there is also provided a program for a terminal device provided with imaging means for capturing the image displayed on a viewfinder, the program causing a computer to function as output means that displays a pre-imaged and/or already-captured image on the finder and displays, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing.
  • According to the present invention, there is also provided a program for a collation system that collates a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address, the program causing a computer to function as: storage means for storing correspondence information in which the identification information of each stored steel material, the address of the area in which the steel material is stored, and stage information indicating its position in the steel material group stacked in a plurality of stages are associated with each other; input receiving means for receiving input of the address and stage information of the steel material to be verified; correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and stage information received by the input receiving means; output means that displays a pre-imaged and/or already-captured image on the finder and displays, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; imaging means for capturing the image displayed on the viewfinder; image recognition means for performing image recognition processing using only the partial image within the specific frame of the captured image, extracting the identification mark written on the surface of each steel material, and recognizing the identification information using the extracted mark; and collation means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means.
  • According to the present invention, there is also provided a collation method, executed by a computer, for collating a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address, the method comprising: storing correspondence information in which the identification information of each stored steel material, the address of the area in which the steel material is stored, and stage information indicating its position in the steel material group stacked in a plurality of stages are associated with each other; receiving input of the address and stage information of the steel material to be verified; acquiring, by referring to the correspondence information, the identification information associated with the received address and stage information; displaying a pre-imaged and/or already-captured image on a finder and displaying, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; capturing the image displayed on the finder; performing image recognition processing using only the partial image within the specific frame of the image captured in the imaging step, extracting the identification marks written on the surfaces of the plurality of steel materials, and recognizing the identification information using the extracted marks; and determining whether the acquired identification information matches the recognized identification information.
  • the system and apparatus of this embodiment are realized by an arbitrary combination of hardware and software, centered on a CPU (Central Processing Unit), a memory, a program loaded into the memory (including programs stored in the memory in advance from the stage of shipping the apparatus, and programs downloaded from storage media such as CDs (Compact Discs) or from servers on the Internet), a storage unit such as a hard disk that stores the program, and a network connection interface. It will be understood by those skilled in the art that various modifications can be made to the realization method and apparatus.
  • in this specification, each device is described as being realized by one apparatus, but the means for realizing it is not limited to this. That is, the configuration may be physically separated or logically separated.
  • the inventors of the present invention have studied a technique for realizing, with a computer, verification work of steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address.
  • First, the storage information, which indicates the identification information (lot number or the like) of the steel material stored at each address and stage, is managed as electronic data.
  • image recognition technology is used for the collation work. Specifically, an image of the identification mark (e.g., printed) attached to the surface of the steel material to be verified is first captured. Then, the identification mark is extracted from the captured image using image recognition technology, and the identification information is recognized using the mark.
  • the storage information is searched using the position of the steel material to be verified (information indicating the address and the number of steps) as a key, and the identification information associated with the key is acquired. Then, it is verified whether the identification information recognized using the image recognition technology matches the identification information acquired from the storage information. According to such a technique, human error can be eliminated. However, in the case of this technique, the following problems peculiar to the collation work of steel materials may occur.
  • Steel imaging needs to be done at the steel storage location. That is, the steel material cannot be moved for imaging.
  • steel materials may be stored outdoors or indoors.
  • the steel material may be stored in an environment that is not preferable for imaging, such as in a low illuminance environment. In such a case, if the image recognition process is performed using the captured image as it is, there is a possibility that sufficient recognition accuracy cannot be obtained.
  • one conceivable means is to change the camera-side settings to suit each environment before imaging the steel material. However, busy workers in the field want to avoid such troublesome changes to camera settings. In addition, changing the camera settings takes time, and the efficiency of the entire collation work deteriorates.
  • FIG. 1 shows an example of a functional block diagram of the matching system 1 of the present embodiment.
  • the collation system 1 of the present embodiment includes a storage unit 11, an input reception unit 12, a correspondence information search unit 13, an imaging unit 14, an image recognition unit 15, a collation unit 16, and an output unit 17.
  • the collation system 1 of the present embodiment may be realized by a single device (e.g., a mobile terminal device), or by two or more devices configured to communicate with each other by wire and/or wirelessly. That is, one apparatus may include all the units shown in FIG. 1. Alternatively, each of two or more devices may include at least some of the units illustrated in FIG. 1, and the matching system 1 including all the units illustrated in FIG. 1 may be realized by combining them.
  • An embodiment in which the verification system 1 is realized by two or more devices will be described in the following embodiment.
  • the collation system 1 of this embodiment is a system for collating a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address.
  • FIG. 2 shows an example of a plurality of areas each assigned an address.
  • AA1 to AF4 are addresses.
  • the plurality of areas do not necessarily have to be regularly arranged as shown in the figure, and may have a random positional relationship.
  • each of the plurality of areas may be located away from each other.
  • the environments may be different from each other, with some areas being indoors and other areas being outdoors.
  • the shape of the areas shown in the figure is an example, and is not limited to a square.
  • Fig. 3 shows an example of a plurality of steel materials that are stacked and stored in a plurality of stages.
  • plate-like slabs are stacked in five stages.
  • the shape of the steel material is not particularly limited.
  • the shape of the steel material may be a plate shape as shown in the figure, or may be other shapes such as a square shape and a rod shape.
  • the number of steels stacked on top of each other is a design matter.
  • each steel material may be mounted on a predetermined mounting member (for example, mounting table), and a plurality of steel materials may be laminated together with the mounting member.
  • an identification mark indicating each identification information is written on the surface of each steel material.
  • an identification mark may be printed on the surface of each steel material by a machine.
  • alternatively, a label (for example, a label on which a computer-generated identification mark is printed) may be stuck on the surface of each steel material.
  • Any form that can be recognized by a conventional image recognition technique can be adopted as the identification mark.
  • the identification mark may be identification information itself made up of alphanumeric characters or the like as shown in the figure, or may be a barcode or a two-dimensional code.
  • in the figure, the identification information consisting of alphanumeric characters is described in one column, but it may be described in two or three columns.
  • such a description form can be changed according to the surface shape of the steel materials. However, it is preferable to adopt the same description form for steel materials of the same type (same shape, size, components, etc.).
  • the storage unit 11 shown in FIG. 1 stores correspondence information in which the identification information of each of the plurality of stored steel materials, the address of the area in which each steel material is stored, and stage information indicating its position in the steel material group stacked in a plurality of stages are associated with each other.
  • the stage information may be, for example, information indicating the number of stages counted from the bottom, or information indicating the number of stages counted from the top. In the following, it is assumed that the stage information is information indicating the number of stages counted from the bottom.
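The two counting conventions above are interchangeable given the stack height. A minimal sketch of the conversion (the helper name and signature are assumptions for illustration, not part of the disclosure):

```python
def stage_from_top(stage_from_bottom: int, total_stages: int) -> int:
    # For a stack of total_stages steel materials, the k-th stage counted
    # from the bottom is the (total_stages - k + 1)-th counted from the top.
    return total_stages - stage_from_bottom + 1
```

For the five-stage stack of FIG. 3, the second stage from the bottom is the fourth from the top: `stage_from_top(2, 5)` returns `4`.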
  • FIG. 4 shows an example of the correspondence information. According to the correspondence information shown in the figure, the steel material with identification information “20130101AB001” is stored at the first stage from the bottom at address AA1.
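The correspondence information of FIG. 4 can be sketched as a mapping keyed by the pair (address, stage counted from the bottom). This is an illustrative model only: the dictionary layout and function name are assumptions, mirroring how the correspondence information search unit 13 described below retrieves identification information by key.

```python
# Hypothetical excerpt of the correspondence information of FIG. 4:
# (address, stage from bottom) -> identification information.
correspondence_info = {
    ("AA1", 1): "20130101AB001",
    ("AA1", 2): "20130101AB002",
}

def search_identification(address: str, stage: int):
    # Look up the steel material recorded at the given address and stage;
    # returns None when no steel material is recorded there.
    return correspondence_info.get((address, stage))
```

For example, `search_identification("AA1", 2)` returns `"20130101AB002"`, matching the search result described in S2 below.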
  • the correspondence information is updated when a steel material is newly added to storage, a stored steel material is shipped, or a stored steel material is moved to another address.
  • the content of the correspondence information may be updated based on a human input operation.
  • the input receiving unit 12 receives input of the address and stage information of the steel material to be verified. For example, when an operator ships a steel material stored at a certain stage at a certain address, this steel material becomes the verification target. The worker then inputs the address and the stage information indicating the stage number. The operator may also sequentially verify a plurality of steel materials, for example at stocktaking, to check whether there are errors in the correspondence information stored in the storage unit 11.
  • FIG. 5 shows an example of a user interface for the input receiving unit 12 to receive input of the steel material address and step information to be verified.
  • an address and stage information can be selected by a pull-down menu.
  • the example of the user interface shown in the figure is merely an example, and the present invention is not limited to this (the same applies to all user interfaces shown below).
  • a user interface using other GUI (graphical user interface) components may be used.
  • the means by which the input receiving unit 12 receives an input is not particularly limited, and can be realized using any input device such as a touch panel display, an input button, a microphone, a keyboard, and a mouse.
  • the correspondence information search unit 13 refers to the correspondence information (see FIG. 4) stored in the storage unit 11, and acquires the identification information of the steel material associated with the address and stage information whose input the input receiving unit 12 has accepted.
  • the output unit 17 has a finder.
  • the viewfinder can be composed of a display, for example.
  • the viewfinder may be a touch panel display, for example.
  • An image to be captured by the imaging unit 14 described below (an image before imaging) and/or an image already captured by the imaging unit 14 (a captured image) is displayed on the finder.
  • when an imaging instruction is input while an image is displayed on the finder, the image displayed on the finder is captured and the imaging data is stored.
  • a captured image is displayed on the viewfinder using the stored imaging data.
  • the output unit 17 displays on the viewfinder a specific frame indicating a partial area to be subjected to image recognition processing in the image displayed on the viewfinder.
  • the image recognition process corresponds to an image recognition process executed by the image recognition unit 15 described below.
  • the output unit 17 is configured to be capable of executing at least one of (1) a process of displaying the specific frame superimposed on an image before imaging, and (2) a process of displaying the specific frame superimposed on a captured image. In the following, the output unit 17 is described as performing process (1), superimposing the specific frame on the image before imaging.
  • FIG. 6 shows an example in which the output unit 17 displays a specific frame superimposed on an image on a display (finder).
  • a part of steel materials (see FIG. 3) stored in a plurality of stages is displayed as an image before imaging. More specifically, the identification mark part of the steel materials stacked and stored in a plurality of stages is displayed.
  • the specific frame F is displayed over the image.
  • an imaging instruction input e.g., a touch of a shooting button
  • an image displayed on the display 100 is captured.
  • the target of the image recognition process is not all the images displayed on the display 100 but only the image in the specific frame F.
  • the shape of the specific frame F is not limited to a square and may be other shapes.
  • at least one of the size and shape of the specific frame F and its display position in the display 100 may be changed according to operator input. For example, on a touch-panel display capable of recognizing a plurality of touch points, methods such as touching and dragging the frame, or touching and sliding it, can be used.
  • the imaging unit 14 captures an image displayed on the display 100.
  • the operator inputs an imaging instruction in a state where the identification mark of the steel material to be collated is within the specific frame F. It is preferable to input an imaging instruction in a state where the identification marks of the steel materials other than the verification target are not included in the specific frame F.
  • the imaging unit 14 associates information (specific frame position information) indicating the position of a specific frame at the time of imaging (eg, a position in the image data, a position in the display 100) with the image data of the captured image.
  • the image recognition unit 15 performs image recognition processing using only a partial image within a specific frame in the image captured by the imaging unit 14.
  • when the image recognition unit 15 acquires the image data of the image captured by the imaging unit 14, it specifies the partial image within the specific frame F using the specific frame position information associated with the image data. Then, image recognition processing is performed using only the image data of the specified partial image.
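A minimal sketch of restricting processing to the partial image, assuming the specific frame position information is given as (left, top, width, height) in image coordinates and the image as a 2-D list of pixel rows (both representations are assumptions; the disclosure does not fix a format):

```python
def crop_to_frame(image, frame):
    # image: 2-D list of pixel rows; frame: (left, top, width, height),
    # as recorded in the specific frame position information.
    left, top, width, height = frame
    return [row[left:left + width] for row in image[top:top + height]]
```

Only the cropped region is then passed to the recognition step, which is what reduces the amount of data to be processed as described later.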
  • the image recognition process includes a process of extracting the identification mark written on the surface of the steel material from the image to be processed, and a process of recognizing identification information using the extracted identification mark. Details of the image recognition process are not particularly limited, and any conventional technique can be applied.
  • the image recognition unit 15 holds in advance feature information (feature amounts) indicating the features of the identification marks, and can perform identification mark extraction and recognition processing using this feature information.
  • the image recognition unit 15 can execute various processes such as noise removal, smoothing, sharpening, two-dimensional filtering, binarization, thinning, and normalization (enlargement/reduction, translation, rotation, density change, etc.). Note that it is not always necessary to execute all of the processes exemplified here.
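As an illustration of one of the listed preprocessing steps, here is a fixed-threshold binarization over a grayscale image represented as a 2-D list. This is a deliberately simplified sketch; a real implementation would typically use an image-processing library, and in low-illuminance environments an adaptive threshold rather than the fixed one assumed here.

```python
def binarize(gray, threshold=128):
    # Binarization step: map each grayscale pixel (0-255) to 1 (foreground)
    # or 0 (background) around a fixed threshold.
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```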
  • the collation unit 16 determines whether the identification information acquired by the correspondence information search unit 13 matches the identification information recognized by the image recognition unit 15.
  • the output unit 17 can display the determination result by the verification unit 16 on the display 100.
  • FIG. 7 shows an example in which the output unit 17 displays the determination result on the display 100.
  • the address and stage information of the steel material to be verified are displayed on the display 100, together with the determination result (collation result) by the collation unit 16 and the recognition result of the image recognition processing by the image recognition unit 15.
  • the input receiving unit 12 receives input of the address and stage information of the steel material to be verified (S1). For example, the input receiving unit 12 receives the input of address and stage information via a user interface such as that shown in FIG. 5 displayed on the display 100 by the output unit 17. Here, it is assumed that the input receiving unit 12 receives the input “AA1” and the second stage from the bottom.
  • the correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 using the address and step information received by the input reception unit 12 as keys, and acquires the steel material identification information associated with the key. (S2).
  • the correspondence information is searched for the correspondence information shown in FIG. 4 using the combination of “AA1” and “2” as a key, and the identification information “20130101AB002” is acquired.
  • the collation system 1 enters the imaging mode.
  • the order of the transition / execution to the imaging mode and the processing of S2 is not limited to the order shown in FIG. 8, and these may be executed in parallel.
  • the display on the display 100 is switched.
  • the output unit 17 displays the image to be imaged on the display 100 and displays the specific frame F so as to overlap the image (see FIG. 6).
  • the operator adjusts the position, orientation, and the like of the collation system 1 to display the identification mark attached to the surface of the steel material to be collated on the display 100 and place the identification mark in the specific frame F.
  • the worker inputs an imaging instruction (for example, touching the shooting button) while maintaining the state.
  • the imaging unit 14 captures an image displayed on the display 100.
  • the imaging unit 14 then associates information (specific frame position information) indicating the position of the specific frame F at the time of imaging (eg, position in the image data, position in the display 100) with the image data of the captured image ( S3).
  • the imaging unit 14 captures an image in the state illustrated in FIG.
  • the image recognition unit 15 performs image recognition processing using only a partial image within the specific frame F in the image captured by the imaging unit 14.
  • the image recognition unit 15 extracts the identification mark written on the surface of each of the plurality of steel materials by the image recognition process (S4), and recognizes the identification information using the extracted identification mark (S5).
  • the image recognition unit 15 has recognized the identification information “20130101AB002”.
  • the collation unit 16 determines (collates) whether the identification information acquired by the correspondence information search unit 13 in S2 matches the identification information recognized by the image recognition unit 15 in S5 (S6). Then, the output unit 17 outputs the collation result of the collation unit 16 in S6 (S7). For example, the output unit 17 outputs a collation result as shown in FIG. 7.
  • suppose the recognition result of the image recognition unit 15 in S5 is “20130101AB00?”.
  • in this case, the identification information recognized in S5 was not correctly recognized and, as a result, does not match the identification information acquired in S2, so the collation result of the collation unit 16 is a mismatch (NG).
  • the output unit 17 outputs a collation result as shown in FIG. According to the display, the operator can recognize that the collation result is NG because the recognition accuracy of the image recognition process is insufficient.
  • the system may be configured so that the worker can visually read the identification mark attached to the steel material to be verified (the steel material stored at the predetermined stage at the predetermined address) and input the visually read identification information via the input receiving unit 12.
  • the collation unit 16 determines (collates) whether the identification information received by the input receiving unit 12 matches the identification information acquired by the correspondence information search unit 13 in S2.
  • touching “to input screen” on the user interface shown in FIG. 9 may cause a transition to the input screen shown in FIG. In this screen, the recognition result “20130101AB00” by the image recognition unit 15 is displayed as an initial value, and the last numeric part that could not be recognized is blank.
  • the input receiving unit 12 may receive an input of identification information from such a user interface, for example.
  • the verification system 1 accepts user input to select one of the captured images.
  • the user images in advance the identification mark attached to the surface of the steel material to be verified, and stores the image data. Here, the stored image showing the identification mark attached to the verification target is selected.
  • after S2, instead of shifting to the imaging mode, the collation system 1 executes a process of displaying the selected captured image on the display 100 with the specific frame F superimposed on it.
  • the user changes at least one of the size and shape of the specific frame F and the display position in the display 100 as necessary, and puts the identification mark in the specific frame F.
  • the worker performs imaging input (for example, touching the imaging button) while maintaining the state.
  • The imaging unit 14 creates data in which the image data of the image displayed on the display 100 is associated with information indicating the position of the specific frame F at the time of receiving the imaging input (e.g., the position in the image data or the position in the display 100; specific frame position information) (imaging processing), and saves it (S3).
  • the processing after S4 is the same as the above example.
  • the process flow is applicable to all the following embodiments.
  • With the collation system of the present embodiment, the collation work for steel materials stacked and stored in a plurality of stages in each of a plurality of areas, each assigned an address, can be realized by computer processing. For this reason, the occurrence of human error can be avoided.
  • The collation system 1 of the present embodiment is configured to be able to solve such a problem. That is, the collation system 1 of the present embodiment does not perform image recognition processing using the entire captured image, but performs it using only the image specified by the specific frame F within the captured image. For this reason, the amount of data to be processed can be reduced, and as a result, the processing time can be shortened.
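  The data reduction described above can be illustrated with a minimal sketch (not part of the patent; representing the specific frame position information as a `(left, top, width, height)` tuple in image-data coordinates is an assumption):

```python
# A minimal sketch of restricting recognition to the partial image specified
# by the specific frame F. The image is a plain 2D list of pixel values.

def crop_to_specific_frame(image, frame):
    """Return only the pixels inside the specific frame F.

    image : 2D list of pixel values (rows of columns)
    frame : (left, top, width, height) -- assumed shape of the
            specific frame position information
    """
    left, top, width, height = frame
    return [row[left:left + width] for row in image[top:top + height]]

# Recognition then runs on the cropped region only, so the amount of data
# handled by the image recognition process shrinks to the frame's area.
image = [[(y, x) for x in range(8)] for y in range(6)]   # stand-in 8x6 image
partial = crop_to_specific_frame(image, (2, 1, 4, 3))
```

  Only `partial` would be handed to the image recognition unit 15, which is what shortens the processing time relative to recognizing the whole frame.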
  • With the verification system 1 of the present embodiment, the verification work for steel materials stacked and stored in a plurality of stages in each of a plurality of areas, each assigned an address, can be performed efficiently and accurately.
  • the verification system 1 of this embodiment is different from the verification system 1 of the first embodiment in that a plurality of steel materials stored at the same address can be targeted for verification at a time.
  • This embodiment will be described below; descriptions of points that overlap with the first embodiment are omitted as appropriate.
  • As in the first embodiment, FIG. 1 shows an example of a functional block diagram of the collation system 1 of the present embodiment.
  • The input receiving unit 12 can receive the input of the address of the steel material to be collated, for example, using a user interface as shown in FIG.
  • The correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and acquires the stage information associated with the address whose input the input receiving unit 12 has received (stage information with which the identification information of the steel materials is associated). Thereafter, the output unit 17 displays a list of the stage information acquired by the correspondence information search unit 13.
  • FIG. 12 shows an example in which the output unit 17 displays a list of stage information on the display 100.
  • Five pieces of stage information (circled numbers 1 to 5) are displayed.
  • The number of pieces of stage information listed here corresponds to the number of steel materials managed, in the correspondence information, as being stored at that address. That is, in the example shown in FIG. 12, the correspondence information manages that five steel materials are stacked and stored in five stages at address AA1.
  • The operator can find an error in the correspondence information by comparing the number of listed items with the number of steel materials actually stored at the address.
  • When the number of stacked steel materials is sufficiently small (e.g., in the single-digit range), confirmation errors by the operator are unlikely to occur.
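  A hypothetical sketch of the correspondence information held by the storage unit 11, and of the count check described above, might look as follows (the data shape, names, and values are illustrative assumptions, not taken from the patent):

```python
# Each address maps to the stage information of the steel materials stored
# there, and each stage maps to its identification information.
CORRESPONDENCE_INFO = {
    "AA1": {1: "20130101AB01", 2: "20130101AB02", 3: "20130101AB03",
            4: "20130101AB04", 5: "20130101AB05"},
}

def list_stage_info(address):
    """Return the stage numbers stored for an address (what the output
    unit 17 would display as a list)."""
    return sorted(CORRESPONDENCE_INFO.get(address, {}))

def count_mismatch(address, actually_stored):
    """Compare the managed count with the count observed on site; a nonzero
    difference points at an error in the correspondence information."""
    return len(list_stage_info(address)) - actually_stored
```

  With data like this, the listed count for "AA1" is five, and any difference from the number of steel materials the operator sees at the address flags a correspondence-information error.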
  • The input receiving unit 12 receives an input selecting one or more pieces of the listed stage information, thereby receiving the input of the stage information of the one or more steel materials to be verified.
  • one or a plurality of pieces of stage information can be selected by checking a check box displayed in association with each piece of stage information.
  • The output unit 17 displays on the display 100 the same number (one or more) of specific frames F as the number of pieces of stage information received by the input receiving unit 12.
  • The operation of each unit when the input receiving unit 12 receives input of a plurality of pieces of stage information and the output unit 17 displays a plurality of specific frames F on the display 100 will be described below.
  • Otherwise, the configuration of each unit can be the same as in the first embodiment.
  • FIG. 13 shows an example in which the output unit 17 displays a plurality of specific frames F on the display 100.
  • In FIG. 13, a part of the steel materials (see FIG. 3) stacked and stored in a plurality of stages is displayed.
  • two specific frames F1 and F2 are displayed so as to overlap the image.
  • Each of the plurality of specific frames F1 and F2 displayed on the display 100 is associated with each piece of stage information received by the input receiving unit 12.
  • the output unit 17 may display a plurality of specific frames F1 and F2 so that the associated stage information can be identified.
  • The circled numbers displayed in the upper left corner of the specific frames F1 and F2 indicate the stage information associated with each. That is, it can be identified that the specific frame F1 is associated with the stage information of the third stage from the bottom, and the specific frame F2 with that of the fourth stage from the bottom. Using such information, the operator can grasp which stage's steel material identification mark should be placed in each of the plurality of specific frames F1 and F2.
  • FIG. 14 shows another example.
  • auxiliary frames G1 to G3 are displayed above and below the specific frames F1 and F2 in addition to the specific frames F1 and F2.
  • In this example, the stage information can be identified from the position of each frame within the frame group consisting of the specific frames F1 and F2 and the auxiliary frames G1 to G3.
  • The specific frame F1 shown in FIG. 14 is located at the third level from the bottom in the frame group; from this, it can be seen that the stage information associated with the specific frame F1 is the third stage from the bottom.
  • the auxiliary frames G1 to G3 may have the same design as the specific frames F1 and F2 and may differ only in shape and size, or may have different designs as shown in FIG.
  • The auxiliary frames G1 to G3 can be made smaller than the specific frames F1 and F2. In this way, it is possible to reduce the extent to which the auxiliary frames G1 to G3 impair the visibility of the image displayed on the display 100.
  • Each of the specific frames F1 and F2 displayed on the display 100 may be individually changeable, in accordance with a user input, in at least one of its display position, shape, and size in the display 100.
  • When only one specific frame F is displayed, its display position, shape, and size may be changed in the same manner.
  • A touch-panel display capable of recognizing a plurality of points may be used, or a touch-panel display that can recognize only one point may be used.
  • To change the display size, there is a method such as touching and dragging one side, or the intersection of two sides, constituting a specific frame.
  • To change the display position, there is a method such as touching and sliding the specific frame.
  • When an image is captured, information indicating the position of the specific frame F at the time of imaging (e.g., the position in the image data or the position in the display 100) is associated with the image data of the captured image.
  • specific frame position information indicating the positions of the plurality of specific frames F is associated with the image data.
  • Each of the plurality of specific frame position information is associated with stage information associated with the specific frame F.
  • When a plurality of pieces of specific frame position information are associated with the image data of the image captured by the imaging unit 14, the image recognition unit 15 performs image recognition processing using only each partial image specified by each piece of specific frame position information. The image recognition unit 15 thereby obtains a plurality of recognition results (identification information). The content of the image recognition processing is the same as in the first embodiment. Each of the plurality of pieces of identification information recognized by the image recognition unit 15 is associated with the stage information associated with the corresponding specific frame position information.
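  The per-frame recognition step above can be sketched as follows. The `recognize` placeholder and the `(left, top, width, height)` frame representation are assumptions for illustration, not from the patent:

```python
def recognize(partial_image):
    # Hypothetical placeholder: a real implementation would read the
    # identification mark in the cropped region (e.g., by OCR).
    return "".join(str(p) for row in partial_image for p in row)

def recognize_all_frames(image, frame_infos):
    """Run recognition once per specific frame position, yielding
    identification information keyed by the stage information tied
    to each frame.

    frame_infos : list of (stage, (left, top, width, height)) pairs,
                  i.e. specific frame position information with its
                  associated stage information.
    """
    results = {}
    for stage, (left, top, w, h) in frame_infos:
        partial = [row[left:left + w] for row in image[top:top + h]]
        results[stage] = recognize(partial)
    return results

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]          # stand-in 3x3 image
out = recognize_all_frames(image, [(3, (0, 0, 2, 1)), (4, (1, 2, 2, 1))])
```

  The stage-keyed result makes the later pairing with the correspondence information straightforward, since each recognized string already carries the stage it came from.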
  • The correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and acquires the plurality of pieces of identification information each associated with the input address and one of the plurality of pieces of stage information.
  • The collation unit 16 determines whether or not the plurality of pieces of identification information acquired by the correspondence information search unit 13 match the plurality of pieces of identification information recognized by the image recognition unit 15. Specifically, pieces of identification information whose associated stage information matches are compared with each other to determine whether they match. The collation unit 16 thereby obtains a determination result for each piece of stage information.
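  The per-stage collation performed by the collation unit 16 can be sketched as follows, under the assumption that both sides are given as `{stage: identification information}` mappings (the patent describes the stage-wise pairing, not this data shape):

```python
def collate_by_stage(acquired, recognized):
    """Compare identification information whose associated stage information
    matches, and return an OK/NG determination result per stage."""
    results = {}
    for stage, expected in acquired.items():
        results[stage] = "OK" if recognized.get(stage) == expected else "NG"
    return results

acquired = {3: "20130101AB03", 4: "20130101AB04"}   # from correspondence info
recognized = {3: "20130101AB03", 4: "20130101AB0"}  # image recognition output
# Stage 4 fails, e.g. because the last character could not be recognized.
```

  The output is one OK/NG entry per stage, which is the shape of the result list the output unit 17 displays in FIG. 15.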
  • FIG. 15 shows an example in which the output unit 17 displays the determination result of the collation unit 16 on the display 100.
  • all five steel materials stored at address AA1 are subject to collation, and the respective recognition results and collation results are shown.
  • The steel material in the second stage from the bottom was recognized with insufficient accuracy by the image recognition unit 15, and as a result, its collation result is NG.
  • When the “NG” button is touched in the user interface, the screen shown in FIG. 10 may be displayed. Processing using the user interface shown in FIG. 10 is the same as in the first embodiment.
  • According to the present embodiment, the same operational effects as those of the first embodiment can be realized. Moreover, since a plurality of steel materials can be collated at once, the working efficiency of the collation processing improves. Furthermore, since the identification mark of each of the plurality of steel materials is isolated within its own specific frame F, the accuracy of the process of extracting the identification marks from the image can be improved.
  • the verification system 1 of this embodiment is different from the verification system 1 of the first embodiment in that a plurality of steel materials stored at the same address can be targeted for verification at a time.
  • This embodiment will be described below; descriptions of points that overlap with the preceding embodiments are omitted as appropriate.
  • As in the first embodiment, FIG. 1 shows an example of a functional block diagram of the collation system 1 of the present embodiment.
  • The input receiving unit 12 can receive the input of the address of the steel material to be collated, for example, using a user interface as shown in FIG. Then, the correspondence information search unit 13 searches the correspondence information stored in the storage unit 11.
  • The correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and recognizes the number of pieces of steel material identification information associated with the address whose input the input receiving unit 12 has accepted, thereby specifying the number of first steel materials stored at that address. Then, the output unit 17 displays on the display 100 the same number of specific frames F (one or more) as the number of first steel materials.
  • The operation of each unit when the number of first steel materials is plural will be described below.
  • The configuration of each unit when the number of first steel materials is one can be the same as in the first embodiment.
  • FIG. 16 shows an example in which the output unit 17 displays the same number of specific frames F as the number of first steel materials on the display 100.
  • the display 100 displays five stacked steel materials.
  • five specific frames F1 to F5 are displayed on the steel material image.
  • Each of the five specific frames F1 to F5 is associated with stage information.
  • the output unit 17 displays a plurality of specific frames F1 to F5 so that the associated stage information can be identified.
  • The first to fifth stages are associated, in order from the bottom, according to the arrangement order of the five specific frames F1 to F5. That is, the worker can identify the stage information associated with each of the specific frames F1 to F5 from the arrangement order of the plurality of specific frames F1 to F5. Using such information, the operator can grasp which stage's steel material identification mark should be placed in each of the plurality of specific frames F1 to F5.
  • Alternatively, the stage information associated with each specific frame can be displayed in an identifiable manner by a configuration similar to that of the second embodiment.
  • a cross mark M is displayed in association with each specific frame F.
  • When the cross mark M is touched, the specific frame associated with it may disappear, or may change into the auxiliary frame described in the second embodiment. That is, the input receiving unit 12 may receive the input of the stage information of the steel materials to be verified by receiving an input selecting one or more of the plurality of specific frames F1 to F5.
  • In other words, the stage information corresponding to the specific frames F whose cross marks M have not been touched, that is, the specific frames F remaining on the display 100 when the input receiving unit 12 receives the imaging instruction input, is input as the stage information of the steel materials to be verified.
  • Each of the specific frames F1 to F5 displayed on the display 100 may be individually changeable, in accordance with a user input, in at least one of its display position, shape, and size in the display 100.
  • When only one specific frame F is displayed, its display position, shape, and size may be changed in the same manner.
  • A touch-panel display capable of recognizing a plurality of points may be used.
  • In the case of a touch-panel display that can recognize only one point, the size can be changed by a method such as touching and dragging one side, or the intersection of two sides, constituting a specific frame, and the display position can be changed by a method such as touching and dragging an arbitrary position within the specific frame.
  • In the case of a display that is not a touch-panel type, there is a method using predetermined buttons.
  • When an image is captured, information indicating the position of the specific frame F at the time of imaging (e.g., the position in the image data or the position in the display 100) is associated with the image data of the captured image.
  • specific frame position information indicating the positions of the plurality of specific frames F is associated with the image data.
  • Each of the plurality of specific frame position information is associated with stage information associated with the specific frame F.
  • When a plurality of pieces of specific frame position information are associated with the image data of the image captured by the imaging unit 14, the image recognition unit 15 performs image recognition processing using only each partial image specified by each piece of specific frame position information. The image recognition unit 15 thereby obtains a plurality of recognition results (identification information). The content of the image recognition processing is the same as in the first embodiment. Each of the plurality of pieces of identification information recognized by the image recognition unit 15 is associated with the stage information associated with the corresponding specific frame position information.
  • The correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and acquires the plurality of pieces of identification information each associated with the input address and one of the plurality of pieces of stage information.
  • The collation unit 16 determines whether or not the plurality of pieces of identification information acquired by the correspondence information search unit 13 match the plurality of pieces of identification information recognized by the image recognition unit 15. Specifically, pieces of identification information whose associated stage information matches are compared with each other to determine whether they match. The collation unit 16 thereby obtains a determination result for each piece of stage information.
  • FIG. 15 shows an example in which the output unit 17 displays the determination result of the collation unit 16 on the display 100.
  • all five steel materials stored at address AA1 are subject to collation, and the respective recognition results and collation results are shown.
  • the steel material in the second level from the bottom has insufficient recognition accuracy by the image recognition unit 15, and as a result, the collation result is NG.
  • When the “NG” button is touched in the user interface, the screen shown in FIG. 10 may be displayed. Processing using the user interface shown in FIG. 10 is the same as in the first embodiment.
  • According to the present embodiment, the same operational effects as those of the first and second embodiments can be realized. Moreover, since a plurality of steel materials can be collated at once, the working efficiency of the collation processing improves. Furthermore, since the identification mark of each of the plurality of steel materials is isolated within its own specific frame F, the accuracy of the process of extracting the identification marks from the image can be improved.
  • The collation system 1 of this embodiment is based on the configurations of the second and third embodiments, which can target a plurality of steel materials at one time. That is, the collation system 1 of the present embodiment can display a plurality of specific frames F on the display 100.
  • each of a plurality of identification marks can be accommodated in each of a plurality of specific frames F arranged in a line as shown in FIGS.
  • However, the identification marks may not be aligned in a line, and their positions may vary. In such a case, as shown in the diagram on the left side of FIG. 17, it is impossible to fit each of the plurality of identification marks into each of the plurality of specific frames F arranged in a line.
  • the input receiving unit 12 of the present embodiment can individually move a plurality of specific frames F (the diagram on the right side of FIG. 17).
  • the display position of the specific frame F may be moved by touching and sliding the specific frame F.
  • As a result, each of the plurality of identification marks can be contained in its own specific frame F.
  • According to the present embodiment, the same operational effects as those of the first to third embodiments can be realized. Further, even when the positions of the identification marks of the plurality of steel materials stacked and stored in a plurality of stages are not aligned but vary, each of the plurality of identification marks can be accommodated in its own specific frame F.
  • The collation system 1 of this embodiment is based on the configurations of the second and third embodiments, which can target a plurality of steel materials at one time. That is, the collation system 1 of the present embodiment can display a plurality of specific frames F on the display 100.
  • The collation system 1 of the present embodiment differs in configuration from the fourth embodiment in that it can solve, in a different way, the problem arising when the positions of the identification marks of a plurality of steel materials stacked and stored in a plurality of stages are not aligned in one line (e.g., one line in the stacking direction) but vary.
  • The input receiving unit 12 receives a designation input designating some of the plurality of specific frames F displayed on the display 100, and an imaging instruction input for capturing an image while those specific frames F are designated.
  • the imaging unit 14 captures an image according to the imaging instruction input received by the input receiving unit 12.
  • The image recognition unit 15 performs image recognition processing using only the partial images within the specific frames F that were designated at the time of imaging, in the image captured by the imaging unit 14.
  • the output unit 17 may display the specific frame F that has been imaged in the designated state and the specific frame F that has not been imaged in the designated state in an identifiable manner. This will be described in more detail below using specific examples.
  • FIG. 18 shows a display example by the output unit 17.
  • a part of the plurality of steel materials is displayed on the display 100.
  • the positions of the identification marks displayed on each of the plurality of steel materials are not aligned in one row (eg, one row in the stacking direction), but vary.
  • On the display 100, three specific frames F1 to F3 are displayed. Three symbols are written in the upper left corner of each of the specific frames F1 to F3. These three symbols are, in order from the left, “information indicating the associated stage information”, “information indicating whether or not the image has been captured in the designated state”, and “information indicating whether or not the frame is designated”. “Information indicating the associated stage information” is as described in the second embodiment.
  • “Information indicating whether or not the image has been captured in the designated state” is the character string “done” or “not done”: “done” indicates that the image has been captured in the designated state, and “not done” indicates that it has not. “Information indicating whether or not the frame is designated” is a check box that the operator can input: a checked specific frame F is designated, and an unchecked one is not. Note that a specific frame F that has already been imaged in the designated state cannot be checked.
  • First, the worker designates only some of the specific frames F, that is, checks the corresponding check boxes.
  • For example, one or more specific frames F are designated, and the identification mark of the corresponding steel material is placed in each designated specific frame F.
  • Then, while maintaining this state, the worker inputs an imaging instruction (e.g., touches the imaging button).
  • When the input receiving unit 12 receives the imaging instruction input, the imaging unit 14 captures an image, and information (specific frame position information) indicating the position of the specific frame F1 designated at the time of imaging is associated with the image data of the captured image.
  • In addition, the input receiving unit 12 receives the stage information associated with the designated specific frame F, and the stage information thus received is associated with the image data of the image.
  • The output unit 17 may continue the screen display illustrated in FIG. 18 even after the imaging unit 14 captures an image in the state illustrated in FIG. However, since the specific frame F1 has been imaged in the designated state, the “not done” character in its upper left corner is replaced with “done”. Further, as with the specific frame F3, its check box can no longer be selected. By viewing such a display, the operator can recognize that it is the identification mark of the second-stage steel material that has not yet been imaged.
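  The per-frame state shown in FIG. 18 can be modeled as follows. This is an illustrative sketch, not the patent's implementation: each specific frame carries its stage information, whether it has been imaged in the designated state (“done”/“not done”), and a check box that is locked once imaging is done.

```python
class SpecificFrame:
    def __init__(self, stage):
        self.stage = stage
        self.imaged = False      # "done" vs "not done"
        self.designated = False  # check-box state

    def toggle_check(self):
        # A frame already imaged in the designated state cannot be checked.
        if not self.imaged:
            self.designated = not self.designated

    def capture(self):
        # Imaging applies only to designated frames; afterwards the frame
        # shows "done" and its check box is locked.
        if self.designated:
            self.imaged = True
            self.designated = False

frames = {stage: SpecificFrame(stage) for stage in (1, 2, 3)}
frames[1].toggle_check()          # worker checks frame 1
frames[1].capture()               # image captured: frame 1 becomes "done"
frames[1].toggle_check()          # locked: the check box no longer responds
remaining = [s for s, f in frames.items() if not f.imaged]
```

  `remaining` plays the role of the "not done" display: it tells the worker which stages still need to be imaged.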
  • After imaging the identification marks of one or more steel materials, the operator can input an instruction to start the collation processing (e.g., touch the collation button).
  • When the input receiving unit 12 receives the instruction input to start the collation processing, collation processing by the image recognition unit 15, the collation unit 16, the correspondence information search unit 13, and the storage unit 11, and collation result output by the output unit 17 are performed.
  • The contents of the collation processing by the image recognition unit 15, the collation unit 16, the correspondence information search unit 13, and the storage unit 11, and of the collation result output by the output unit 17, are the same as those in the first to fourth embodiments.
  • According to the present embodiment, each of the plurality of identification marks can be individually contained in its own specific frame F.
  • Further, by referring to the “information indicating whether or not the image has been captured in the designated state” associated with each specific frame F, the worker can recognize which of the plurality of steel materials stacked and stored in a plurality of stages have not yet been verified.
  • the collation system 1 of this embodiment is different from the first to fifth embodiments in that it includes a terminal device and a server device that are configured to be able to communicate with each other by wire and / or wirelessly.
  • FIG. 19 shows an example of a functional block diagram of the verification system 1 of the present embodiment.
  • the terminal device 2 includes an input reception unit 12, an imaging unit 14, an output unit 17, and a terminal side transmission / reception unit 18.
  • the server device 3 includes a storage unit 11, a correspondence information search unit 13, an image recognition unit 15, a collation unit 16, and a server side transmission / reception unit 19.
  • the terminal device 2 and the server device 3 can communicate with each other via the terminal side transmission / reception unit 18 and the server side transmission / reception unit 19.
  • FIG. 20 shows another example of a functional block diagram of the verification system 1 of the present embodiment.
  • the terminal device 2 includes an input reception unit 12, an imaging unit 14, an image recognition unit 15, an output unit 17, and a terminal side transmission / reception unit 18.
  • the server device 3 includes a storage unit 11, a correspondence information search unit 13, a collation unit 16, and a server side transmission / reception unit 19. The terminal device 2 and the server device 3 can communicate with each other via the terminal side transmission / reception unit 18 and the server side transmission / reception unit 19.
  • the terminal-side transmitting / receiving unit 18 and the server-side transmitting / receiving unit 19 are configured to be able to communicate with each other by wire and / or wirelessly, and can transmit and receive data.
  • the terminal-side transmitting / receiving unit 18 may transmit only the image data of a partial image in the specific frame F in the image captured by the imaging unit 14 to the server device 3 (external device).
  • the terminal-side transmission / reception unit 18 includes information for identifying a partial image in the specific frame F in the image captured by the imaging unit 14 (eg, specific frame position information indicating the position of the specific frame F) and the image Data may be transmitted to the server device 3 (external device).
  • In the following, it is assumed that the terminal-side transmitting/receiving unit 18 transmits, to the server device 3 (external device), information for identifying the partial image in the specific frame F in the image captured by the imaging unit 14 (for example, specific frame position information indicating the position of the specific frame F) together with the image data.
  • The input receiving unit 12 of the terminal device 2 receives the input of the address of the steel material to be verified through, for example, a user interface as shown in FIG. 11 (S10). Then, the terminal-side transmission/reception unit 18 of the terminal device 2 transmits the address information to the server device 3 (S11). The server device 3 receives the address information via the server-side transmission/reception unit 19. Thereafter, the correspondence information search unit 13 of the server device 3 searches the correspondence information (see FIG. 4) stored in the storage unit 11 using the address as a key, and acquires the stage information associated with the key (stage information with which the steel material identification information is associated) (S12). Then, the server-side transmission/reception unit 19 of the server device 3 returns the acquired stage information to the terminal device 2 (S13). The terminal device 2 acquires the stage information via the terminal-side transmission/reception unit 18. The stage information transmitted and received here may be associated with the address received in S11.
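  A hedged sketch of the S10–S13 exchange: the terminal sends an address, the server searches the correspondence information and returns the associated stage information. The in-process function call below stands in for the terminal-side and server-side transmission/reception units, and the data is illustrative:

```python
CORRESPONDENCE_INFO = {                 # storage unit 11 (illustrative data)
    "AA1": {1: "20130101AB01", 2: "20130101AB02"},
}

def server_handle_address(address):
    """Server device 3: search the correspondence info by address (S12)
    and return the stage information, echoing the address back (S13)."""
    stages = sorted(CORRESPONDENCE_INFO.get(address, {}))
    return {"address": address, "stages": stages}

def terminal_request_stages(address):
    """Terminal device 2: send the input address (S11) and receive the
    stage information to list on the display (S13)."""
    return server_handle_address(address)   # stand-in for the network hop

reply = terminal_request_stages("AA1")
```

  Echoing the address in the reply mirrors the note that the transmitted stage information "may be associated with the address", which lets the terminal pair asynchronous replies with the request they answer.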
  • the output unit 17 of the terminal device 2 displays the acquired stage information on the display 100 as a list as shown in FIG.
  • Next, the input receiving unit 12 receives, from this user interface, an input designating one or more pieces of stage information (S14).
  • Then, the terminal device 2 switches to the imaging mode. That is, the output unit 17 displays the image to be captured on the display 100 as shown in FIG. Further, the output unit 17 displays on the display 100 the same number of specific frames (here, F1 and F2) as the number of pieces of stage information designated in S14. Note that the circled numbers displayed in the upper left corner of the specific frames F1 and F2 indicate the stage information associated with each. Using such information, the operator can grasp which stage's steel material identification mark should be placed in each of the plurality of specific frames F1 and F2.
  • the operator adjusts the position, orientation, etc. of the verification system 1 to place a predetermined identification mark in each of the displayed one or more specific frames F.
  • the worker inputs an imaging instruction (for example, touching the shooting button) while maintaining the state.
  • the imaging unit 14 captures an image displayed on the display 100 (S15).
  • the imaging unit 14 associates information (specific frame position information) indicating the position of each of the one or more specific frames F displayed at the time of imaging with the image data of the captured image.
  • Further, the stage information associated with each specific frame F is associated with the corresponding specific frame position information.
  • The terminal-side transmission/reception unit 18 of the terminal device 2 transmits the image data of the captured image to the server device 3 together with the mutually associated specific frame position information and stage information (S16).
  • The server device 3 receives the image data of the captured image, the specific frame position information, and the stage information via the server-side transmission/reception unit 19.
  • the image data, the specific frame position information, and the stage information transmitted / received here may be associated with the address associated with the stage information received in S13.
  • the image recognition unit 15 of the server device 3 performs image recognition processing using only a partial image within a specific frame in the image captured by the imaging unit 14.
  • the image recognition unit 15 extracts the identification mark written on the surface of each of the one or more steel materials, and recognizes the identification information using the extracted identification mark (S17).
  • The correspondence information search unit 13 searches the correspondence information in the storage unit 11 using the address and stage information acquired in S16 as a key, and acquires one or more pieces of identification information.
  • Then, the collation unit 16 determines whether or not the identification information acquired by the correspondence information search unit 13 matches the identification information recognized by the image recognition unit 15 (S18); specifically, pieces of identification information whose associated stage information matches are collated with each other.
  • the server side transmission / reception unit 19 of the server device 3 returns the determination result of S18 to the terminal device 2 (S19).
  • the terminal device 2 acquires the determination result via the terminal side transmission / reception unit 18. Note that the determination result transmitted and received here may be associated with an address and stage information.
  • the output unit 17 of the terminal device 2 creates, for example, a user interface as shown in FIG. 15 and displays it on the display 100 (S20).
  • 1. A collation system comprising: storage means for storing the correspondence information; input accepting means for accepting input of the address and the stage information of the steel material to be collated; correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information accepted by the input accepting means; output means having a viewfinder, displaying a not-yet-captured and/or already-captured image on the viewfinder, and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing; imaging means for capturing the image displayed on the viewfinder; image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; and collation means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means.
  • 2. The collation system according to 1, wherein the output means outputs the determination result of the collation means. 3. The collation system according to 1 or 2, wherein the output means outputs the recognition result of the image recognition means. 4.
  • the verification system can target a plurality of the steel materials stored at the same address at a time
  • the input accepting means can accept input of the address and a plurality of the step information in which a plurality of the steel materials to be collated are stored
  • the collating system in which the output means displays a plurality of the specific frames on the finder in the same number as the number of the stage information accepted by the input accepting means. 5.
  • Each of the plurality of specific frames is associated with each of the stage information received by the input receiving unit, The collation system, wherein the output means displays a plurality of the specific frames so that the associated stage information can be identified. 6).
  • the correspondence information search means acquires the stage information associated with the address at which the input reception means has received an input,
  • the output means displays a list of the stage information acquired by the correspondence information search means,
  • the input accepting means accepts the input of the stage information of the steel materials to be collated by receiving an input selecting one or more pieces of the stage information displayed in the list. 7.
  • the verification system can target a plurality of the steel materials stored at the same address at a time,
  • the correspondence information search means acquires the number of the first steel materials, which are the steel materials stored at the address whose input the input accepting means has accepted, as associated with that address in the correspondence information, and
  • the output means displays, on the finder, the same number of the specific frames as the number of the first steel materials. 8.
  • Each of the specific frames is associated with the step information associated with each of the first steel materials, The collation system, wherein the output means displays a plurality of the specific frames so that the associated stage information can be identified.
  • the input accepting means accepts the input of the stage information of the steel materials to be collated by receiving an input selecting one or more of the specific frames displayed on the finder. 10.
  • the collation system which can change at least one of the display position, shape, and size in the finder individually for the plurality of specific frames displayed on the finder.
  • the input receiving means includes a designation input for designating a part of the plurality of specific frames displayed on the finder, and an imaging instruction input for imaging in a state where some of the specific frames are designated.
  • the imaging unit images in accordance with the imaging instruction input received by the input receiving unit,
  • the collation system in which the image recognition means performs an image recognition process using only a partial image within the specific frame designated at the time of image capture of the image captured by the image capture means.
  • the output unit displays the specific frame that has been imaged in a specified state and the specific frame that has not been imaged in a specified state in a distinguishable manner.
  • the verification system includes a terminal device configured to be able to communicate with each other, and a server device,
  • the terminal device includes the input receiving unit, the output unit, and the imaging unit.
  • the server device includes the storage unit, the correspondence information search unit, and the collation unit, A verification system in which either the terminal device or the server device includes the image recognition means.
  • a terminal device comprising the input receiving unit, the output unit, and the imaging unit included in the collation system according to any one of 1 to 12. 15.
  • a viewfinder displaying a pre-imaged image and / or an imaged image on the viewfinder, and overlaying the image with a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image
  • Output means for displaying on the viewfinder, Imaging means for capturing the image displayed on the viewfinder;
  • a transmission unit configured to transmit only a partial image within the specific frame in the image captured by the imaging unit to an external device.
  • a viewfinder displaying a pre-imaged image and / or an imaged image on the viewfinder, and overlaying the image with a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image
  • Output means for displaying on the viewfinder, Imaging means for capturing the image displayed on the viewfinder;
  • a terminal device comprising: transmission means for transmitting the image to an external device together with information for identifying a partial image in the specific frame in the image captured by the imaging means.
  • a server apparatus comprising: the storage unit included in the verification system according to any one of 1 to 12, the correspondence information search unit, and the verification unit. 18. The server apparatus according to claim 17, further comprising the image recognition means included in the collation system according to any one of 1 to 12. 19.
  • a program for a terminal device provided with an imaging means for capturing an image displayed on a viewfinder, Computer A pre-imaged and / or already-captured image is displayed on the finder, and a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image is superimposed on the image and displayed on the finder.
  • a program for a terminal device provided with an imaging means for capturing an image displayed on a viewfinder, Computer A pre-imaged and / or already-captured image is displayed on the finder, and a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image is superimposed on the image and displayed on the finder.
  • a program for a collation system that collates a plurality of steel materials that are stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address, Computer
  • the identification information of each of the plurality of steel materials stored, the address of the area in which each of the steel materials is stored, and stage information indicating the position in the steel material group stacked in a plurality of stages are associated with each other.
  • storage means for storing the correspondence information; input accepting means for accepting input of the address and the stage information of the steel material to be collated; correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information accepted by the input accepting means;
  • output means for displaying a not-yet-captured and/or already-captured image on the finder and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing; imaging means for capturing the image displayed on the finder;
  • image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; collation means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means; a program for causing the computer to function as the above means. 21-2. 21.
  • the input receiving means is configured to accept input of the address and a plurality of the step information in which a plurality of the steel materials to be verified are stored, A program for causing the output unit to display the plurality of specific frames on the finder in the same number as the number of the stage information received by the input receiving unit.
  • 21-5 In the program described in 21-4, Associating each of the plurality of specific frames with each of the stage information received by the input receiving means; A program that causes the output means to display a plurality of the specific frames so that the associated stage information can be identified. 21-6.
  • the computer A program for causing a plurality of the specific frames displayed on the finder to individually function as means for changing at least one of a display position, a shape, and a size in the finder. 21-11.
  • the identification information of each of the plurality of steel materials stored, the address of the area in which each of the steel materials is stored, and stage information indicating the position in the steel material group stacked in a plurality of stages are associated with each other.
  • a pre-imaged and / or already-captured image is displayed on the finder, and a specific frame indicating a partial area to be subjected to image recognition processing in the displayed image is displayed on the finder so as to overlap the image.
  • in the image recognition step, image recognition processing is performed using only the partial image within the specific frame of the image captured in the imaging step, the identification marks written on the surfaces of the plurality of steel materials are extracted, and the identification information is recognized using the extracted marks.
  • the verification method can target a plurality of the steel materials stored at the same address at a time,
  • in the input receiving step, it is possible to accept input of the address and a plurality of pieces of the stage information for a plurality of the steel materials to be collated, and
  • in the output step, the same number of the specific frames as the number of pieces of the stage information accepted in the input receiving step are displayed on the finder. 22-5.
  • Each of the plurality of specific frames is associated with each of the step information received in the input receiving step,
  • a plurality of the specific frames are displayed so that the associated step information can be identified. 22-6.
  • the step information associated with the address that has received the input in the input reception step is acquired,
  • the stage information acquired in the correspondence information search step is displayed as a list,
  • a verification method for receiving input of the stage information of the steel material to be verified by receiving an input for selecting one or more of the stage information displayed in the list. 22-7.
  • the verification method can target a plurality of the steel materials stored at the same address at a time,
  • in the correspondence information search step, the number of the first steel materials, which are the steel materials stored at the address received in the input receiving step, is acquired as associated with that address in the correspondence information.
  • Each of the specific frames is associated with the step information associated with each of the first steel materials
  • a plurality of the specific frames are displayed so that the associated step information can be identified. 22-9.
  • a collation method for accepting an input of the step information of the steel material to be collated by receiving an input for selecting one or a plurality of the specific frames displayed on the finder. 22-10.
  • in the input receiving step, a designation input designating some of the plurality of specific frames displayed on the finder, and an imaging instruction input for imaging with some of the specific frames designated, are accepted,
  • in the imaging step, imaging is performed according to the imaging instruction input received in the input receiving step, and
  • in the image recognition step, image recognition processing is performed using only the partial image within the specific frame that was designated at the time of imaging, in the image captured in the imaging step. 22-12.
  • the output step the specific frame that has been imaged in the designated state and the specific frame that has not been imaged in the designated state are displayed in a distinguishable manner.
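As a rough illustration only, the exchange described in steps S16 to S20 above (the terminal sends image data with specific frame positions and stage information; the server recognizes, searches the correspondence information, collates, and returns determination results) can be sketched as follows. All names are assumptions, and the identifiers arrive pre-recognized here for brevity; in the actual system the server performs recognition on the image data itself.

```python
# Illustrative sketch of the S16-S20 flow described above. The server
# receives, per specific frame, the stage information and (here, already
# recognized) identification information, looks up the expected value in
# the correspondence information, and returns a determination per frame.

correspondence = {("C-07", 1): "LOT-7001", ("C-07", 2): "LOT-7002"}

def server_collate(request):
    """request: {'address': str, 'frames': [{'stage', 'recognized_id'}]}"""
    results = []
    for frame in request["frames"]:
        expected = correspondence.get((request["address"], frame["stage"]))
        results.append({"stage": frame["stage"],
                        "match": expected == frame["recognized_id"]})  # S18
    return results  # returned to the terminal (S19) and displayed (S20)

reply = server_collate({"address": "C-07",
                        "frames": [{"stage": 1, "recognized_id": "LOT-7001"},
                                   {"stage": 2, "recognized_id": "LOT-9999"}]})
print(reply)
```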

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Collating Specific Patterns (AREA)

Abstract

This collation system (1) has: a correspondence information search unit (13) that acquires, from correspondence information, the identification information associated with the stage information and the address, received as input, of the steel material to be collated; an output unit (17) that displays an image on a finder and displays a specific frame superimposed on it; an imaging unit (14) that captures the image displayed on the finder; an image recognition unit (15) that performs an image recognition process using only the partial image within the specific frame of the captured image, extracts an identification mark written on the surface of a steel material, and recognizes identification information using the extracted mark; and a collation unit (16) that determines whether or not the identification information acquired by the correspondence information search unit (13) and the identification information recognized by the image recognition unit (15) match.

Description

Verification system, terminal device, server device, verification method, and program
The present invention relates to a collation system, a terminal device, a server device, a collation method, and a program.
In the manufacture of steel products, semi-finished steel materials are classified by shape into slabs (processed into thick or thin plates), billets (cylindrical or prismatic), blooms (rectangular bars), beam blanks (roughly H-shaped in cross section), and so on. Steel materials such as slabs are stacked and stored in multiple stages in each of a plurality of areas, each of which is assigned an address. A manager manages the storage state of the steel materials using storage information that indicates which identification information (e.g., lot number) is stored at which address and at which stage. The storage information is updated whenever an event occurs, such as a new steel material being added to the storage location, a stored steel material being shipped, or a steel material being moved to another address. With such storage information, the storage location of a steel material with a given piece of identification information can easily be identified when it is shipped.
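As a rough illustration, the storage information described here can be modeled as a table keyed by address and stage. The following Python sketch is purely illustrative; the data values and function name are assumptions, not part of the invention:

```python
# Hypothetical model of the storage information described above:
# each (address, stage) position maps to the identification information
# (e.g., lot number) of the steel material stored there.

storage_info = {
    # (address, stage): lot number
    ("A-01", 1): "LOT-1001",
    ("A-01", 2): "LOT-1002",
    ("A-02", 1): "LOT-2001",
}

def find_location(lot_number):
    """Return the (address, stage) where a given lot is stored, or None."""
    for position, stored_lot in storage_info.items():
        if stored_lot == lot_number:
            return position
    return None

print(find_location("LOT-1002"))  # -> ("A-01", 2)
```

This is exactly the lookup that makes shipping efficient: given identification information, the table yields the storage location directly, with no physical search.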
However, the possibility that the storage information contains errors due to some failure such as human error cannot be completely excluded. Therefore, at the time of shipment, an operation (collation work) is performed to confirm that the identification information attached (e.g., printed) on the surface of the steel material actually stored at the location identified from the storage information matches the identification information of the steel material to be shipped. In addition, at predetermined timings (e.g., at inventory), collation work is performed using the storage information and the identification information attached (e.g., printed) on the surfaces of the steel materials stored at predetermined positions, to confirm that the storage information contains no errors.
Patent Document 1 discloses a camera for photographing certificate documents, used to acquire image data of certificate documents that a customer requesting a loan transmits to a loan examination apparatus, the camera comprising photographing-frame display means for displaying on a finder a photographing frame that matches at least some of the ruled lines dividing the entry fields of the certificate documents.
JP 2012-74804 A
Conventionally, the above collation work at shipment and at predetermined timings has been performed manually: an operator visually compares the storage information with the identification information attached (e.g., printed) on the surfaces of the steel materials stored at predetermined positions. Such work is very laborious, takes a great deal of time, and is prone to human error.
An object of the present invention is to provide a technique for efficiently collating steel materials stacked and stored in multiple stages in each of a plurality of areas to which addresses are assigned.
According to the present invention,
A collation system for collating a plurality of steel materials that are stacked and stored in each of a plurality of areas each assigned an address,
The identification information of each of the plurality of steel materials stored, the address of the area in which each of the steel materials is stored, and stage information indicating the position in the steel material group stacked in a plurality of stages are associated with each other. Storage means for storing correspondence information;
Input accepting means for accepting input of the address of the steel material to be verified and the step information;
Corresponding information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information received by the input receiving means;
Output means having a viewfinder, displaying a not-yet-captured and/or already-captured image on the viewfinder, and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing;
Imaging means for capturing the image displayed on the viewfinder;
Image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark;
Collation means for determining whether or not the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means;
A verification system is provided.
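To make the interplay of the means enumerated above concrete, here is a minimal Python sketch. The recognizer is a stub standing in for the image recognition means, and every name is an assumption rather than part of the claimed system.

```python
# Minimal sketch of the claimed flow: look up the expected identification
# information for (address, stage), "recognize" the identification mark in
# the partial image inside the specific frame (stubbed here), and collate.

correspondence_info = {("B-03", 1): "LOT-5501"}  # (address, stage) -> id

def recognize_identification(partial_image):
    # Stub: a real image recognition means would extract the identification
    # mark from the pixels inside the specific frame and read it.
    return partial_image["printed_mark"]

def collate(address, stage, partial_image):
    expected = correspondence_info.get((address, stage))
    recognized = recognize_identification(partial_image)
    return expected is not None and expected == recognized

print(collate("B-03", 1, {"printed_mark": "LOT-5501"}))  # True
print(collate("B-03", 1, {"printed_mark": "LOT-5502"}))  # False
```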
Further, according to the present invention, there is provided a terminal device including the input accepting means, the output means, and the imaging means of the above collation system.
Moreover, according to the present invention,
Output means having a viewfinder, displaying a not-yet-captured and/or already-captured image on the viewfinder, and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing;
Imaging means for capturing the image displayed on the viewfinder;
There is provided a terminal device comprising: a transmission unit that transmits only a partial image within the specific frame in the image captured by the imaging unit to an external device.
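The behavior of transmitting only the partial image within the specific frame amounts to cropping before sending. A sketch under assumed representations (a 2D pixel list and a (top, left, height, width) frame, neither of which is specified by the patent):

```python
# Sketch of "transmit only the partial image within the specific frame".
# The frame and image representations are hypothetical; a real terminal
# would crop the camera image and send only the cropped bytes.

def crop(image, frame):
    """image: 2D list of pixels; frame: (top, left, height, width)."""
    top, left, h, w = frame
    return [row[left:left + w] for row in image[top:top + h]]

image = [[(r, c) for c in range(8)] for r in range(6)]  # 6x8 dummy image
frame = (1, 2, 3, 4)  # a specific frame set on the viewfinder
partial = crop(image, frame)
# Only `partial` (not `image`) would be sent to the server device,
# reducing both transfer size and server-side recognition work.
```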
Moreover, according to the present invention,
Output means having a viewfinder, displaying a not-yet-captured and/or already-captured image on the viewfinder, and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing;
Imaging means for capturing the image displayed on the viewfinder;
There is provided a terminal device comprising: transmission means for transmitting the image to an external device together with information for identifying a partial image in the specific frame in the image captured by the imaging means.
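In this second variant the full image travels together with information identifying the partial image within the specific frame, so the cropping happens on the server. A minimal sketch of packaging such a payload; every field name below is illustrative, not taken from the patent:

```python
# Sketch of the second terminal variant above: send the whole captured
# image plus information identifying the partial image within each
# specific frame, letting the external device do the cropping.

def build_payload(image_bytes, frames):
    """frames: list of dicts with frame position and stage information."""
    return {
        "image": image_bytes,  # the full captured image
        "specific_frames": [
            {"top": f["top"], "left": f["left"],
             "height": f["height"], "width": f["width"],
             "stage": f["stage"]}  # stage information per frame
            for f in frames
        ],
    }

payload = build_payload(b"<jpeg bytes>", [
    {"top": 10, "left": 20, "height": 60, "width": 200, "stage": 1},
])
print(len(payload["specific_frames"]))  # 1
```

Compared with cropping on the terminal, this trades transfer size for a simpler terminal and keeps the full image available on the server.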
Further, according to the present invention, there is provided a server device including the storage means, the correspondence information search means, and the collation means of the above collation system.
Moreover, according to the present invention,
A program for a terminal device provided with an imaging means for capturing an image displayed on a viewfinder,
Computer
Output means for displaying a not-yet-captured and/or already-captured image on the finder and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing,
Transmitting means for transmitting only a partial image within the specific frame of the image captured by the imaging means to an external device;
A program for causing the computer to function as the above means is provided.
Moreover, according to the present invention,
A program for a terminal device provided with an imaging means for capturing an image displayed on a viewfinder,
Computer
Output means for displaying a not-yet-captured and/or already-captured image on the finder and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing,
Transmitting means for transmitting the image to an external device together with information for identifying a partial image in the specific frame in the image captured by the imaging means;
A program for causing the computer to function as the above means is provided.
Moreover, according to the present invention,
A program for a collation system that collates a plurality of steel materials that are stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address,
Computer
The identification information of each of the plurality of steel materials stored, the address of the area in which each of the steel materials is stored, and stage information indicating the position in the steel material group stacked in a plurality of stages are associated with each other. Storage means for storing correspondence information;
An input receiving means for receiving the address of the steel material to be verified and the input of the step information;
Correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information received by the input reception means;
Output means for displaying a not-yet-captured and/or already-captured image on the finder and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing,
Imaging means for capturing the image displayed on the viewfinder;
Image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark;
Collation means for determining whether or not the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means;
A program for causing the computer to function as the above means is provided.
Moreover, according to the present invention,
A collation method for collating a plurality of steel materials that are stacked and stored in each of a plurality of areas each assigned with an address,
Computer
The identification information of each of the plurality of steel materials stored, the address of the area in which each of the steel materials is stored, and stage information indicating the position in the steel material group stacked in a plurality of stages are associated with each other. Remember the correspondence information,
An input receiving step of receiving input of the address of the steel material to be verified and the step information;
A correspondence information search step of referring to the correspondence information and acquiring the identification information associated with the address and the step information received in the input acceptance step;
An output step of displaying a not-yet-captured and/or already-captured image on the finder and displaying, superimposed on the image, a specific frame indicating a partial area of the displayed image to be subjected to image recognition processing;
An imaging step of imaging the image displayed on the viewfinder;
An image recognition step of performing image recognition processing using only the partial image within the specific frame of the image captured in the imaging step, extracting the identification marks written on the surfaces of the plurality of steel materials, and recognizing the identification information using the extracted identification marks;
A collation step for determining whether or not the identification information acquired in the correspondence information search step matches the identification information recognized in the image recognition step;
Is provided.
 本発明によれば、各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管された鋼材の照合作業を効率的に行うことが可能となる。 According to the present invention, it is possible to efficiently perform a collation operation of steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address.
 上述した目的、およびその他の目的、特徴および利点は、以下に述べる好適な実施の形態、およびそれに付随する以下の図面によってさらに明らかになる。 The above-described object and other objects, features, and advantages will be further clarified by a preferred embodiment described below and the following drawings attached thereto.
本実施形態の照合システムの機能ブロック図の一例を示す図である。 A diagram showing an example of a functional block diagram of the collation system of the present embodiment.
鋼材を保管するエリアの例を説明するための図である。 A diagram for explaining an example of areas in which steel materials are stored.
複数段に積み重ねて保管された鋼材の例を示す図である。 A diagram showing an example of steel materials stacked and stored in a plurality of stages.
本実施形態の対応情報の一例を示す図である。 A diagram showing an example of the correspondence information of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の照合方法の処理の流れの一例を示すフローチャートである。 A flowchart showing an example of the processing flow of the collation method of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の出力部が表示するユーザインターフェースの一例を示す図である。 A diagram showing an example of a user interface displayed by the output unit of the present embodiment.
本実施形態の照合システムの機能ブロック図の一例を示す図である。 A diagram showing an example of a functional block diagram of the collation system of the present embodiment.
本実施形態の照合システムの機能ブロック図の一例を示す図である。 A diagram showing an example of a functional block diagram of the collation system of the present embodiment.
本実施形態の照合方法の処理の流れの一例を示すシーケンス図である。 A sequence diagram showing an example of the processing flow of the collation method of the present embodiment.
 以下、本発明の実施の形態について図面を用いて説明する。 Hereinafter, embodiments of the present invention will be described with reference to the drawings.
 なお、本実施形態のシステム及び装置は、任意のコンピュータのCPU(Central Processing Unit)、メモリ、メモリにロードされたプログラム(あらかじめ装置を出荷する段階からメモリ内に格納されているプログラムのほか、CD(Compact Disc)等の記憶媒体やインターネット上のサーバ等からダウンロードされたプログラムも含む)、そのプログラムを格納するハードディスク等の記憶ユニット、ネットワーク接続用インタフェイスを中心にハードウェアとソフトウェアの任意の組合せによって実現される。そして、その実現方法、装置にはいろいろな変形例があることは、当業者には理解されるところである。 Note that the system and apparatus of this embodiment are realized by any combination of hardware and software, centering on the CPU (Central Processing Unit) of an arbitrary computer, a memory, programs loaded into the memory (including not only programs stored in the memory before shipment of the apparatus but also programs downloaded from a storage medium such as a CD (Compact Disc) or from a server on the Internet), a storage unit such as a hard disk storing those programs, and a network connection interface. Those skilled in the art will understand that there are various modifications of the method and apparatus for realizing this.
 また、本実施形態の説明において利用する機能ブロック図は、ハードウェア単位の構成ではなく、機能単位のブロックを示している。これらの図においては、各装置は1つの機器により実現されるよう記載されているが、その実現手段はこれに限定されない。すなわち、物理的に分かれた構成であっても、論理的に分かれた構成であっても構わない。 The functional block diagrams used in the description of this embodiment show blocks in units of functions, not configurations in units of hardware. In these diagrams, each apparatus is depicted as being realized by a single device, but the means of realization is not limited to this. That is, the configuration may be physically divided or logically divided.
<第1の実施形態>
 本発明者らは、各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管された鋼材の照合作業をコンピュータで実現する技術を検討した。まず、どの番地の何段目にどの識別情報(ロットナンバー等)の鋼材が保管されているかを示す保管情報を電子データで管理する。そして、照合作業には、画像認識技術を利用する。具体的には、まず、照合対象の鋼材の表面等に付された(例:印字された)識別マークを撮像装置で撮像する。そして、画像認識技術を利用して撮像した画像中から識別マークを抽出するとともに、当該識別マークを利用して識別情報を認識する。また、照合対象の鋼材の位置(番地及び何段目かを示す情報)をキーとして上記保管情報を検索し、当該キーに対応付けられている識別情報を取得する。そして、画像認識技術を利用して認識した識別情報と、保管情報から取得した識別情報とが一致するか否かを照合する。このような技術によれば、人為的ミスを排除することができる。しかし、当該技術の場合、鋼材の照合作業に特有の以下のような課題が発生し得る。
<First Embodiment>
The present inventors studied a technique for carrying out, with a computer, the collation work for steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address. First, storage information indicating which steel material, identified by which identification information (lot number or the like), is stored at which address and at which stage is managed as electronic data. Image recognition technology is then used for the collation work. Specifically, an identification mark attached to (e.g., printed on) the surface of the steel material to be collated is first captured with an imaging device. An identification mark is then extracted from the captured image using image recognition technology, and the identification information is recognized from that mark. In addition, the storage information is searched using the position of the steel material to be collated (the address and information indicating the stage) as a key, and the identification information associated with that key is acquired. It is then checked whether the identification information recognized by image recognition matches the identification information acquired from the storage information. Such a technique can eliminate human error. In this technique, however, the following problems peculiar to the collation of steel materials can arise.
 鋼材の撮像は、鋼材の保管場所で行う必要がある。すなわち、撮像のために鋼材を移動させることはできない。鋼材の保管場所は様々であり、屋外に保管される場合もあれば、屋内に保管される場合もある。例えば、鋼材は、低照度の環境下等、撮像用としては好ましくない環境下で保管される場合もある。かかる場合、撮像された画像をそのまま用いて画像認識処理を行うと、十分な認識精度を得られなくなる恐れがある。当該課題を解決する手段として、カメラ側の設定を各環境に適するように変更し、鋼材を撮像する手段が考えられる。しかし、多忙な現場の作業者は、このような面倒なカメラの設定の変更作業を回避することを望む。また、カメラの設定変更に時間を要し、照合作業全体の作業効率は悪くなる。 The steel materials must be imaged where they are stored; they cannot be moved for imaging. Storage locations vary, with some materials stored outdoors and others indoors. For example, steel materials may be stored in an environment unfavorable for imaging, such as one with low illuminance. In such a case, if image recognition is performed on the captured image as-is, sufficient recognition accuracy may not be obtained. One conceivable solution is to change the camera settings to suit each environment before imaging the steel material. However, busy workers in the field want to avoid such troublesome changes to camera settings. Changing the camera settings also takes time, lowering the efficiency of the collation work as a whole.
 このため、認証精度を向上させるためには、様々な画像認識技術（補正処理等）を利用して画像認識処理の精度を高める必要がある。かかる場合、画像認識処理に要する処理時間が大きくなり、照合作業全体の作業効率は悪くなる。コンピュータ処理の待ち時間が数秒違うだけでも、現場の作業者に与える心的影響は大きい。 Therefore, in order to improve the recognition accuracy, the accuracy of the image recognition processing must be raised using various image recognition techniques (correction processing and the like). In that case, the processing time required for image recognition increases, and the efficiency of the entire collation work deteriorates. Even a difference of a few seconds in computer processing wait time has a large mental impact on workers in the field.
 そこで、本発明者らは、現場の作業者に面倒な操作を要することなく、十分な作業効率が実現される認証処理技術を発明した。以下、詳細に説明する。 The present inventors therefore invented a verification processing technique that achieves sufficient work efficiency without requiring troublesome operations from workers in the field. This is described in detail below.
 図1に、本実施形態の照合システム1の機能ブロック図の一例を示す。図示するように、本実施形態の照合システム1は、記憶部11と、入力受付部12と、対応情報検索部13と、撮像部14と、画像認識部15と、照合部16と、出力部17とを有する。本実施形態の照合システム1は、1つの装置（例：携帯端末装置）により実現されてもよいし、有線及び/又は無線で互いに通信可能に構成された2つ以上の装置により実現されてもよい。すなわち、1つの装置が図1に示すすべての部を備えてもよい。または、2つ以上の装置各々が図1に示す部の少なくとも一部を備え、それらを組み合わせることで図1に示すすべての部を備える照合システム1が実現されてもよい。2つ以上の装置で照合システム1を実現する実施形態は、以下の実施形態で説明する。 FIG. 1 shows an example of a functional block diagram of the collation system 1 of the present embodiment. As illustrated, the collation system 1 of the present embodiment includes a storage unit 11, an input receiving unit 12, a correspondence information search unit 13, an imaging unit 14, an image recognition unit 15, a collation unit 16, and an output unit 17. The collation system 1 of the present embodiment may be realized by a single device (e.g., a mobile terminal device), or by two or more devices configured to communicate with each other by wire and/or wirelessly. That is, a single device may include all the units shown in FIG. 1. Alternatively, each of two or more devices may include at least some of the units shown in FIG. 1, and the collation system 1 including all the units shown in FIG. 1 may be realized by combining them. An embodiment in which the collation system 1 is realized by two or more devices is described in a later embodiment.
 本実施形態の照合システム1は、各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管される複数の鋼材の照合を行うためのシステムである。 The collation system 1 of this embodiment is a system for collating a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas each assigned an address.
 図2に、各々番地が割り振られた複数のエリアの一例を示す。AA1乃至AF4が番地である。なお、複数のエリアは必ずしも図示するように規則正しく配列されている必要はなく、無造作な位置関係であってもよい。また、複数のエリア各々は互いに離れて位置してもよい。例えば、あるエリアは屋内にあり、他のエリアは屋外にあるという風に、環境が互いに異なっていてもよい。また、図示するエリアの形状は一例であり、四角に限定されない。 FIG. 2 shows an example of a plurality of areas each assigned an address. AA1 to AF4 are the addresses. Note that the areas do not necessarily have to be arranged regularly as illustrated; their positional relationship may be arbitrary. The areas may also be located apart from one another. For example, their environments may differ, with some areas indoors and others outdoors. The shapes of the areas shown in the figure are also merely an example and are not limited to rectangles.
 図3に、複数段に積み重ねて保管されている複数の鋼材の一例を示す。図示する例では、板状のスラブが5段に積み重ねられている。本実施形態において鋼材の形状は特段制限されない。鋼材の形状は図示するような板状であってもよいし、角材状、棒状等、その他の形状であってもよい。しかし、同じ種類の鋼材(形状、大きさ、成分等が同じ)を互いに積み重ねるのが好ましい。互いに積み重ねる鋼材の数は設計的事項である。 Fig. 3 shows an example of a plurality of steel materials that are stacked and stored in a plurality of stages. In the illustrated example, plate-like slabs are stacked in five stages. In the present embodiment, the shape of the steel material is not particularly limited. The shape of the steel material may be a plate shape as shown in the figure, or may be other shapes such as a square shape and a rod shape. However, it is preferable to stack the same type of steel materials (the same shape, size, components, etc.) on each other. The number of steels stacked on top of each other is a design matter.
 なお、図3に示す例では、複数の鋼材が互いに接した状態で積層されているが、各鋼材の間に中間部材を挟んで積層してもよい。例えば、各鋼材が所定の載置部材(例:載置台)の上に載置されており、この載置部材ごと複数の鋼材が積層されていてもよい。 In the example shown in FIG. 3, a plurality of steel materials are stacked in contact with each other, but may be stacked with an intermediate member interposed between the steel materials. For example, each steel material may be mounted on a predetermined mounting member (for example, mounting table), and a plurality of steel materials may be laminated together with the mounting member.
 図3に示すように、各鋼材の表面には各々の識別情報を示す識別マークが記される。例えば、機械で各鋼材の表面に識別マークが印字されていてもよい。または、識別マークを記したラベル（例：コンピュータで生成した識別マークを印刷したラベル）が各鋼材の表面に貼付されていてもよい。識別マークは、従来の画像認識技術で認識可能なあらゆる形態を採用することができる。例えば、識別マークは、図示するように英数字等で構成された識別情報そのものであってもよいし、バーコードや二次元コードなどであってもよい。図示する例では一列に英数字からなる識別情報が記載されているが、二列及び三列に分けて記載してもよい。鋼材の表面形状に応じてこのような記載形態を変更することができる。しかし、同じ種類の鋼材（形状、大きさ、成分等が同じ）には同じ記載形態を採用するのが好ましい。 As shown in FIG. 3, an identification mark indicating the respective identification information is written on the surface of each steel material. For example, the identification mark may be printed on the surface of each steel material by a machine. Alternatively, a label bearing the identification mark (e.g., a label on which a computer-generated identification mark is printed) may be affixed to the surface of each steel material. The identification mark can take any form recognizable by conventional image recognition technology. For example, the identification mark may be the identification information itself, composed of alphanumeric characters as illustrated, or may be a barcode, a two-dimensional code, or the like. In the illustrated example the identification information is written as a single line of alphanumeric characters, but it may instead be written in two or three lines; the form of writing can be changed according to the surface shape of the steel material. However, it is preferable to use the same form of writing for steel materials of the same type (same shape, size, composition, etc.).
 図1に示す記憶部11は、保管されている複数の鋼材各々の識別情報と、鋼材各々が保管されているエリアの番地と、複数段に積み重ねられた鋼材群の中の位置を示す段情報とを対応付けた対応情報を記憶する。段情報は、例えば、下から数えて何段目かを示す情報であってもよいし、または、上から数えて何段目かを示す情報であってもよい。以下では、段情報は、下から数えて何段目かを示す情報であるものとする。図4に対応情報の一例を示す。図示する対応情報によれば、AA1番地の下から数えて1段目には識別情報「20130101AB001」の鋼材が保管されていることが分かる。対応情報は、保管する鋼材が新たに追加されたり、保管している鋼材が出荷されたり、保管する番地を他の番地に移動したりというイベントが発生すると内容が更新される。例えば、人間の入力作業に基づいて対応情報の内容が更新されてもよい。 The storage unit 11 shown in FIG. 1 stores correspondence information associating the identification information of each of the plurality of stored steel materials, the address of the area in which each steel material is stored, and stage information indicating the position within the group of steel materials stacked in a plurality of stages. The stage information may, for example, indicate the stage counted from the bottom or the stage counted from the top; in the following, the stage information is assumed to indicate the stage counted from the bottom. FIG. 4 shows an example of the correspondence information. According to the illustrated correspondence information, the steel material with identification information "20130101AB001" is stored at the first stage from the bottom at address AA1. The correspondence information is updated when an event occurs such as a new steel material being stored, a stored steel material being shipped, or a steel material being moved to another address. For example, the content of the correspondence information may be updated based on human input.
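As a concrete illustration, the correspondence information of FIG. 4 can be modeled as a simple lookup table keyed by (address, stage). The following is a minimal, hypothetical sketch; the data values and function names are illustrative assumptions, not part of the disclosed apparatus:

```python
# Hypothetical model of the correspondence information (cf. FIG. 4):
# key = (address, stage counted from the bottom),
# value = identification information of the steel material stored there.
correspondence = {
    ("AA1", 1): "20130101AB001",
    ("AA1", 2): "20130101AB002",
    ("AA1", 3): "20130101AB003",
}


def store_steel(address, stage, identification):
    """Update the correspondence information when a steel material is newly stored."""
    correspondence[(address, stage)] = identification


def ship_steel(address, stage):
    """Remove the entry when a steel material is shipped; returns its ID (or None)."""
    return correspondence.pop((address, stage), None)
```

Moving a steel material to another address would then be a `ship_steel` followed by a `store_steel`, mirroring the update events described above.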
 図1に戻り、入力受付部12は、照合対象の鋼材の番地及び段情報の入力を受付ける。例えば、作業者は、ある番地のある段数に保管されている鋼材を出荷する際、この鋼材を照合対象とする。そして、作業者は、その番地と、その段数を示す段情報とを入力する。その他、作業者は、例えば棚卸等のタイミングで記憶部11が記憶する対応情報に誤りが存在しないか確認するために、複数の鋼材を順に照合対象としてもよい。 Returning to FIG. 1, the input receiving unit 12 receives input of the address and the stage information of the steel material to be collated. For example, when an operator ships a steel material stored at a certain stage at a certain address, that steel material becomes the collation target, and the operator inputs the address and the stage information indicating the stage. The operator may also collate a plurality of steel materials in turn, for example at a timing such as stocktaking, to check that the correspondence information stored in the storage unit 11 contains no errors.
 図5に、入力受付部12が照合対象の鋼材の番地及び段情報の入力を受付けるためのユーザインターフェースの一例を示す。図示する例では、プルダウンメニューにより、番地及び段情報が選択可能になっている。なお、図示するユーザインターフェースの例はあくまで一例であり、これに限定されない（当該前提は、以下に示すすべてのユーザインターフェースにおいて同様）。例えば、その他のGUI（graphical user interface）部品を用いたユーザインターフェースとしてもよい。入力受付部12が入力を受付ける手段は特段制限されず、タッチパネルディスプレイ、入力ボタン、マイク、キーボード、マウス等のあらゆる入力装置を用いて実現することができる。 FIG. 5 shows an example of a user interface through which the input receiving unit 12 receives input of the address and the stage information of the steel material to be collated. In the illustrated example, the address and the stage information can be selected from pull-down menus. Note that the illustrated user interface is merely an example, and the present invention is not limited to it (the same applies to all user interfaces shown below). For example, a user interface using other GUI (graphical user interface) components may be used. The means by which the input receiving unit 12 receives input is not particularly limited, and can be realized using any input device such as a touch panel display, input buttons, a microphone, a keyboard, or a mouse.
 図1に戻り、対応情報検索部13は、記憶部11に記憶されている対応情報（図4参照）を参照し、入力受付部12が入力を受付けた番地及び段情報に対応付けられている鋼材の識別情報を取得する。 Returning to FIG. 1, the correspondence information search unit 13 refers to the correspondence information stored in the storage unit 11 (see FIG. 4) and acquires the identification information of the steel material associated with the address and the stage information received by the input receiving unit 12.
 出力部17は、ファインダーを有する。ファインダーは例えばディスプレイで構成することができる。ファインダーは、例えばタッチパネルディスプレイであってもよい。ファインダーには、以下で説明する撮像部14により撮像される画像（撮像前の画像）及び/又は撮像部14により撮像された画像（撮像済みの画像）が表示される。例えば、ファインダーに画像が表示されている状態で撮像指示入力がなされると、ファインダーに表示されている画像が撮像され、撮像データが記憶される。また、記憶されている撮像データを用いて、ファインダーに撮像済みの画像が表示される。 The output unit 17 has a finder. The finder can be constituted by a display, for example a touch panel display. The finder displays an image to be captured by the imaging unit 14 described below (an image before imaging) and/or an image already captured by the imaging unit 14 (a captured image). For example, when an imaging instruction is input while an image is displayed on the finder, the image displayed on the finder is captured and the imaging data is stored. A captured image can also be displayed on the finder using the stored imaging data.
 出力部17は、ファインダーに表示されている画像の中の画像認識処理の対象となる一部領域を示す特定フレームを画像に重ねてファインダーに表示する。画像認識処理は、以下で説明する画像認識部15により実行される画像認識処理が該当する。なお、出力部17は、（1）撮像前の画像に特定フレームを重ねて表示する処理、及び、（2）撮像済みの画像に特定フレームを重ねて表示する処理の少なくとも一方を実行可能に構成される。以下では、出力部17は、（1）撮像前の画像に特定フレームを重ねて表示する処理を実行するものとして説明する。 The output unit 17 displays on the finder, superimposed on the displayed image, a specific frame indicating the partial area of the image that is to be subjected to image recognition processing. The image recognition processing here is that executed by the image recognition unit 15 described below. The output unit 17 is configured to be able to execute at least one of (1) a process of displaying the specific frame superimposed on an image before imaging, and (2) a process of displaying the specific frame superimposed on an already-captured image. In the following description, the output unit 17 executes (1), the process of displaying the specific frame superimposed on the image before imaging.
 出力部17が特定フレームを画像に重ねてディスプレイ(ファインダー)に表示している一例を図6に示す。図示するディスプレイ100には、撮像前の画像として、複数段に積み重ねて保管されている鋼材(図3参照)の一部が表示されている。より具体的には、複数段に積み重ねて保管されている鋼材の識別マーク部分が表示されている。そして、ディスプレイ100には、当該画像に重ねて特定フレームFが表示されている。図示する状態で入力受付部12が撮像指示入力(例:撮影ボタンのタッチ)を受け付けると、ディスプレイ100に表示されている画像が撮像される。しかし、画像認識処理の対象はディスプレイ100に表示されている全ての画像でなく、特定フレームF内の画像のみとなる。 FIG. 6 shows an example in which the output unit 17 displays a specific frame superimposed on an image on a display (finder). In the illustrated display 100, a part of steel materials (see FIG. 3) stored in a plurality of stages is displayed as an image before imaging. More specifically, the identification mark part of the steel materials stacked and stored in a plurality of stages is displayed. On the display 100, the specific frame F is displayed over the image. When the input receiving unit 12 receives an imaging instruction input (e.g., a touch of a shooting button) in the state illustrated, an image displayed on the display 100 is captured. However, the target of the image recognition process is not all the images displayed on the display 100 but only the image in the specific frame F.
 特定フレームFの形状は四角に限定されず、その他の形状であってもよい。また、特定フレームFの大きさ、形状及びディスプレイ100内の表示位置の少なくとも1つは作業者の入力に応じて変更できてもよい。変更する方法は、例えば、複数点を認識可能なタッチパネル型ディスプレイの場合において大きさを変更するときにはピンチイン・ピンチアウトなどの手法によるものがあり、表示位置を変更するときには特定フレーム内の任意の位置をタッチしてドラッグするなどの方法によるものがある。または一点しか認識できないタッチパネル型ディスプレイの場合において大きさを変更するときには特定フレームを構成する一辺または二辺の交点をタッチしドラッグするなどの方法（タッチアンドスライド）によるものがあり、表示位置を変更するときには特定フレーム内の任意の位置をタッチしてドラッグするなどの方法（タッチアンドスライド）によるものがある。またはタッチパネル型でないディスプレイの場合は所定のボタンによるものがある。なお、識別マークを機械で鋼材に印字したり、また、コンピュータで生成した識別マークを印刷したラベルを鋼材に貼付する場合は、識別マークの形態（1列書き、2列書き、識別マークの形状、識別マークの表示領域の形状等。）がバラツクことなく所定の状態に統一される。このため、予め、その識別マークの形態に適した形状の特定フレームFを出力部17に保持させておいてもよい。 The shape of the specific frame F is not limited to a rectangle and may be any other shape. At least one of the size and shape of the specific frame F and its display position on the display 100 may be changeable by operator input. For example, on a touch panel display capable of recognizing multiple touch points, the size can be changed by a method such as pinch-in/pinch-out, and the display position can be changed by touching and dragging an arbitrary position within the specific frame. On a touch panel display that recognizes only a single touch point, the size can be changed by touching and dragging a side of the specific frame or the intersection of two sides (touch and slide), and the display position can be changed by touching and dragging an arbitrary position within the specific frame (touch and slide). On a display that is not a touch panel, predetermined buttons can be used.
When the identification mark is printed on the steel material by a machine, or when a label printed with a computer-generated identification mark is affixed to the steel material, the form of the identification mark (single-line or two-line writing, the shape of the mark, the shape of its display area, etc.) is unified in a predetermined state without variation. For this reason, a specific frame F with a shape suited to that form of identification mark may be held in the output unit 17 in advance.
 図1に戻り、撮像部14は、ディスプレイ100に表示されている画像を撮像する。作業者は、照合対象の鋼材の識別マークが特定フレームF内に収まっている状態で、撮像指示入力を行う。照合対象以外の鋼材の識別マークは特定フレームF内に含まれない状態で撮像指示入力を行うのが好ましい。撮像部14は、撮像した画像の画像データに、撮像時点の特定フレームの位置(例:画像データ内の位置、ディスプレイ100内の位置)を示す情報(特定フレーム位置情報)を対応付ける。 Returning to FIG. 1, the imaging unit 14 captures an image displayed on the display 100. The operator inputs an imaging instruction in a state where the identification mark of the steel material to be collated is within the specific frame F. It is preferable to input an imaging instruction in a state where the identification marks of the steel materials other than the verification target are not included in the specific frame F. The imaging unit 14 associates information (specific frame position information) indicating the position of a specific frame at the time of imaging (eg, a position in the image data, a position in the display 100) with the image data of the captured image.
 画像認識部15は、撮像部14が撮像した画像の中の特定フレーム内の一部画像のみを用いて画像認識処理を行う。画像認識部15は、撮像部14が撮像した画像の画像データを取得すると、その画像データに対応付けられている特定フレーム位置情報を利用して特定フレームF内の一部画像を特定する。そして、特定した画像の画像データのみを用いて、画像認識処理を行う。 The image recognition unit 15 performs image recognition processing using only a partial image within a specific frame in the image captured by the imaging unit 14. When the image recognition unit 15 acquires the image data of the image captured by the imaging unit 14, the image recognition unit 15 specifies a partial image in the specific frame F using specific frame position information associated with the image data. Then, image recognition processing is performed using only the image data of the specified image.
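Restricting the recognition to the partial image inside the specific frame F amounts to cropping the captured image with the specific frame position information. The following is a hedged sketch; the coordinate convention `(left, top, width, height)` and the list-of-rows image representation are illustrative assumptions:

```python
def crop_to_specific_frame(image, frame):
    """Cut out only the partial image inside the specific frame F.

    `image` is a 2-D list of pixel values (rows of columns); `frame` is
    (left, top, width, height) in image coordinates, as carried by the
    specific frame position information associated with the image data.
    """
    left, top, width, height = frame
    return [row[left:left + width] for row in image[top:top + height]]
```

Only the cropped sub-image would then be passed to the image recognition processing, which is what keeps the processing time short even when heavier correction processing is applied.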
 画像認識処理は、処理対象の画像内から鋼材の表面に記されている識別マークを抽出する処理と、抽出した識別マークを用いて識別情報を認識する処理を含む。画像認識処理の詳細は特段制限されず、従来のあらゆる技術を適用することができる。画像認識部15は識別マークの特徴を示す特徴情報（特徴量）を予め保持しておき、当該特徴情報を利用して識別マークの抽出、及び、認証処理を行うことができる。画像認識部15は、ノイズ除去、平滑化、鮮鋭化、2次元フィルタリング処理、2値化、細線化、正規化（拡大・縮小、平行移動、回転移動、濃度変化等）等のあらゆる処理を実行することができる。なお、必ずしもここで例示した処理の全てを実行する必要はない。 The image recognition processing includes a process of extracting the identification mark written on the surface of the steel material from the image to be processed, and a process of recognizing the identification information using the extracted identification mark. The details of the image recognition processing are not particularly limited, and any conventional technique can be applied. The image recognition unit 15 holds in advance feature information (feature values) representing the characteristics of the identification marks, and can use this feature information to extract the identification marks and perform the recognition. The image recognition unit 15 can execute processes such as noise removal, smoothing, sharpening, two-dimensional filtering, binarization, thinning, and normalization (enlargement/reduction, translation, rotation, density change, etc.). Note that not all of the processes exemplified here need be executed.
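As one hedged illustration of the kind of preprocessing listed above, a global binarization step could look like the following. This is a deliberately crude stand-in sketch; the actual processing performed by the image recognition unit 15 is not specified in the disclosure:

```python
def binarize(gray):
    """Global binarization, one of the preprocessing steps (noise removal,
    smoothing, binarization, ...) that may precede mark extraction.

    `gray` is a 2-D list of brightness values. The threshold is simply the
    mean brightness -- an illustrative automatic choice, not the disclosed
    method. Pixels brighter than the threshold become 1, others 0.
    """
    values = [v for row in gray for v in row]
    threshold = sum(values) / len(values)
    return [[1 if v > threshold else 0 for v in row] for row in gray]
```

In practice an adaptive threshold (e.g., Otsu's method) would tolerate the low-illuminance storage environments discussed earlier better than a global mean, at the cost of extra processing time.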
 照合部16は、対応情報検索部13が取得した識別情報と、画像認識部15が認識した識別情報とが一致するか否か判別する。 The collation unit 16 determines whether the identification information acquired by the correspondence information search unit 13 matches the identification information recognized by the image recognition unit 15.
 出力部17は、照合部16による判別結果をディスプレイ100に表示することができる。図7に、出力部17がディスプレイ100に判別結果を表示した一例を示す。図示する例の場合、ディスプレイ100に照合対象の鋼材の番地及び段情報が表示されるとともに、照合部16による判別結果（認証結果）、及び、画像認識部15による画像認識処理の認識結果が表示されている。 The output unit 17 can display the determination result of the collation unit 16 on the display 100. FIG. 7 shows an example in which the output unit 17 displays the determination result on the display 100. In the illustrated example, the display 100 shows the address and the stage information of the steel material being collated, together with the determination result (verification result) of the collation unit 16 and the recognition result of the image recognition processing by the image recognition unit 15.
 次に、図8のフローチャートを用いて、本実施形態の照合方法の処理の流れの一例を説明する。 Next, an example of the processing flow of the collation method of this embodiment will be described using the flowchart of FIG.
 まず、入力受付部12が照合対象の鋼材の番地及び段情報の入力を受付ける（S1）。例えば、入力受付部12は、出力部17によりディスプレイ100に表示されている図5に示すようなユーザインターフェースを介して番地及び段情報の入力を受付ける。ここでは、入力受付部12は「AA1」番地、下から「2」段目の入力を受付けたとする。 First, the input receiving unit 12 receives the input of the address and the stage information of the steel material to be collated (S1). For example, the input receiving unit 12 receives the input of the address and the stage information via a user interface such as that shown in FIG. 5, displayed on the display 100 by the output unit 17. Here, it is assumed that the input receiving unit 12 receives the input of address "AA1" and the second stage from the bottom.
 すると、対応情報検索部13は、入力受付部12が受付けた番地及び段情報をキーとして、記憶部11が記憶する対応情報を検索し、キーに対応付けられている鋼材の識別情報を取得する（S2）。ここでは、対応情報検索部13は、「AA1」、「2」の組み合わせをキーとして図4に示す対応情報を検索し、「20130101AB002」の識別情報を取得したとする。 The correspondence information search unit 13 then searches the correspondence information stored in the storage unit 11 using the address and the stage information received by the input receiving unit 12 as the key, and acquires the identification information of the steel material associated with that key (S2). Here, it is assumed that the correspondence information search unit 13 searches the correspondence information shown in FIG. 4 using the combination of "AA1" and "2" as the key and acquires the identification information "20130101AB002".
 入力受付部12が番地及び段情報の入力を受付けると、照合システム1は撮像モードになる。撮像モードへの移行・実行とS2の処理の順番は図8に示す順に限定されず、これらが並行して実行されてもよい。照合システム1が撮像モードに移行されると、ディスプレイ100の表示が切り替わる。出力部17は、ディスプレイ100に撮像対象の画像を表示するとともに、当該画像に重ねて特定フレームFを表示する（図6参照）。作業者は、照合システム1の位置、向き等を調整することで、ディスプレイ100に照合対象の鋼材の表面に付された識別マークを表示させるとともに、当該識別マークを特定フレームF内に収める。そして、作業者は、当該状態を維持させたまま撮像指示入力（例：撮影ボタンのタッチ）を行う。すると、撮像部14は、ディスプレイ100に表示されていた画像を撮像する。そして、撮像部14は、撮像した画像の画像データに、撮像時点の特定フレームFの位置（例：画像データ内の位置、ディスプレイ100内の位置）を示す情報（特定フレーム位置情報）を対応付ける（S3）。ここでは、撮像部14は、図6に示す状態で画像を撮像したものとする。 When the input receiving unit 12 receives the input of the address and the stage information, the collation system 1 enters the imaging mode. The order of entering the imaging mode and executing the processing of S2 is not limited to the order shown in FIG. 8; they may be executed in parallel. When the collation system 1 enters the imaging mode, the display on the display 100 switches. The output unit 17 displays the image to be captured on the display 100 and displays the specific frame F superimposed on that image (see FIG. 6). By adjusting the position, orientation, and so on of the collation system 1, the operator brings the identification mark attached to the surface of the steel material to be collated onto the display 100 and fits the identification mark within the specific frame F. The operator then performs the imaging instruction input (e.g., touching the shooting button) while maintaining that state. The imaging unit 14 then captures the image displayed on the display 100, and associates information (specific frame position information) indicating the position of the specific frame F at the time of imaging (e.g., the position within the image data, the position on the display 100) with the image data of the captured image (S3). Here, it is assumed that the imaging unit 14 captured the image in the state shown in FIG. 6.
 すると、画像認識部15は、撮像部14が撮像した画像の中の特定フレームF内の一部画像のみを用いて画像認識処理を行う。画像認識部15は、当該画像認識処理により、複数の鋼材各々の表面に記されている識別マークを抽出するとともに(S4)、抽出した識別マークを用いて識別情報を認識する(S5)。ここでは、画像認識部15は、「20130101AB002」の識別情報を認識したとする。 Then, the image recognition unit 15 performs image recognition processing using only a partial image within the specific frame F in the image captured by the imaging unit 14. The image recognition unit 15 extracts the identification mark written on the surface of each of the plurality of steel materials by the image recognition process (S4), and recognizes the identification information using the extracted identification mark (S5). Here, it is assumed that the image recognition unit 15 has recognized the identification information “20130101AB002”.
 その後、照合部16は、S2で対応情報検索部13が取得した識別情報と、S5で画像認識部15が認識した識別情報とが一致するか否か判別(照合)する(S6)。そして、出力部17がS6での照合部16の照合結果を出力する(S7)。例えば、出力部17は、ディスプレイ100に図7に示すような照合結果を出力する。 Thereafter, the collation unit 16 determines (collation) whether the identification information acquired by the correspondence information search unit 13 in S2 matches the identification information recognized by the image recognition unit 15 in S5 (S6). And the output part 17 outputs the collation result of the collation part 16 in S6 (S7). For example, the output unit 17 outputs a collation result as shown in FIG.
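The search (S2) and collation (S6) steps above amount to a keyed lookup followed by a string comparison. A minimal sketch with illustrative names (the sample data mirrors the worked example in the text):

```python
def collate(correspondence, address, stage, recognized_id):
    """S2: look up the identification information registered for the given
    address and stage in the correspondence information.
    S6: judge whether it matches the ID recognized from the image in S5.
    Returns (expected_id, matched)."""
    expected = correspondence.get((address, stage))
    return expected, expected is not None and expected == recognized_id


# Worked example from the text: address "AA1", stage 2.
correspondence = {("AA1", 2): "20130101AB002"}
```

A partially recognized result such as "20130101AB00?" fails the comparison, which is exactly the mismatch (NG) case described next.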
Note that the recognition result “20130101AB00?” by the image recognition unit 15 in S5 is an example of a display method for the case where the identification mark in the last position could not be recognized. In this case, the identification information in S5 has not been recognized correctly and consequently does not match the identification information acquired in S2, so the collation result of the collation unit 16 is a mismatch (NG). In such a case, the output unit 17 outputs a collation result such as that shown in FIG. 9 to the display 100. From this display, the operator can recognize that the collation result is NG because the recognition accuracy of the image recognition processing was insufficient.
In such a case, the system may be configured so that the operator can visually check the identification mark attached to the steel material to be verified (the steel material stored at the predetermined tier at the predetermined address) and input the visually confirmed identification information (via the input receiving unit 12). In this case, the collation unit 16 determines (collates) whether the identification information received by the input receiving unit 12 matches the identification information acquired by the correspondence information search unit 13 in S2.
For example, touching “to input screen” on the user interface shown in FIG. 9 may cause a transition to an input screen such as that shown in FIG. 10. On this screen, the recognition result “20130101AB00” by the image recognition unit 15 is displayed as an initial value, and the last character that could not be recognized is left blank. The input receiving unit 12 may receive the input of the identification information via such a user interface, for example.
Up to this point, the description has assumed that the output unit 17 executes (1) the process of superimposing the specific frame F on the image before capture. When the output unit 17 executes (2) the process of superimposing the specific frame F on an already-captured image, the processing can be as follows. First, before S1, before S2, or after S2, the verification system 1 accepts a user input selecting one of the already-captured images. The user images the identification mark attached to the surface of the steel material to be verified in advance and saves the image data. Here, the user selects the image in which the identification mark attached to the surface of the steel material to be verified is displayed. Instead of shifting to imaging mode after S2, the verification system 1 executes a process of displaying the selected already-captured image on the display 100 with the specific frame F superimposed on it. Thereafter, the user changes at least one of the size, shape, and display position within the display 100 of the specific frame F as necessary and fits the identification mark within the specific frame F. The operator then performs an imaging input (e.g., touching the capture button) while maintaining that state. The imaging unit 14 then creates data in which information (specific frame position information) indicating the position of the specific frame F at the time the imaging input was received (e.g., the position within the image data or within the display 100) is associated with the image data of the image displayed on the display 100 (imaging processing), and saves it (S3). The processing from S4 onward is the same as in the example above. This processing flow is applicable in all of the following embodiments.
According to the verification system of the present embodiment described above, the work of verifying steel materials stacked and stored in multiple tiers in each of a plurality of areas, each of which is assigned an address, can be realized by computer processing. The occurrence of human error can therefore be avoided.
Incidentally, in the steel verification work described above, the steel materials may be imaged under unfavorable imaging conditions, so the accuracy of the image recognition processing needs to be raised using various image recognition techniques (correction processing, etc.). However, when such image recognition techniques (correction processing, etc.) are adopted, the processing time required for the image recognition processing increases accordingly, and the work efficiency of the verification work as a whole may deteriorate.
The verification system 1 of the present embodiment is configured to be able to solve this problem. That is, the verification system 1 of the present embodiment performs image recognition processing not on the entire captured image but only on the image specified by the specific frame F within the captured image. The amount of data to be processed can therefore be kept small, and as a result the processing time can be shortened.
Thus, according to the verification system 1 of the present embodiment, the work of verifying steel materials stacked and stored in multiple tiers in each of a plurality of areas, each of which is assigned an address, can be performed efficiently and with sufficient accuracy.
<Second Embodiment>
The verification system 1 of the present embodiment differs from the verification system 1 of the first embodiment in that a plurality of steel materials stored at the same address can be targeted for verification at once. The present embodiment is described below. Descriptions of configurations similar to those of the verification system 1 of the first embodiment are omitted as appropriate.
An example of a functional block diagram of the verification system 1 of the present embodiment is shown in FIG. 1, as in the first embodiment.
The input receiving unit 12 can receive the input of the address of the steel materials to be verified, for example via a user interface such as that shown in FIG. 11. When the input receiving unit 12 receives the input of an address, the correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and acquires the tier information (with which the identification information of the steel materials is associated) associated with the address whose input the input receiving unit 12 received. The output unit 17 then displays a list of the tier information acquired by the correspondence information search unit 13.
FIG. 12 shows an example in which the output unit 17 displays a list of tier information on the display 100. Here, five pieces of tier information (circled numbers 1 to 5) are displayed. The number of pieces of tier information listed here corresponds to the number of steel materials managed in the correspondence information as being stored at that address. That is, in the example shown in FIG. 12, the correspondence information indicates that five steel materials are stored stacked in five tiers at address AA1. By comparing the number of listed items with the number of steel materials actually stored at that address, the operator can find errors present in the correspondence information. When the number of stacked steel materials is sufficiently small (e.g., in the single digits), confirmation mistakes by the operator are unlikely to occur.
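Looking up the tier information held at an address, and hence the number of steel materials managed there, can be sketched as a filter over the correspondence information. This is an illustrative sketch only: the record layout (address, tier, identification information triples) and the sample values other than address AA1 are assumptions, not taken from the patent.

```python
# Correspondence information: (address, tier, identification information).
# Sample values are made up for illustration.
CORRESPONDENCE = [
    ("AA1", 1, "20130101AB001"),
    ("AA1", 2, "20130101AB002"),
    ("AA1", 3, "20130101AB003"),
    ("AA1", 4, "20130101AB004"),
    ("AA1", 5, "20130101AB005"),
    ("AB2", 1, "20130102CD001"),
]

def tiers_at(address):
    """Return the tier information stored at the given address, bottom first."""
    return sorted(tier for addr, tier, _ in CORRESPONDENCE if addr == address)
```

The length of the returned list is the count that the output unit lists and that the operator compares against the number of steel materials actually stacked at the address.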
The input receiving unit 12 receives the input of the tier information of one or more steel materials to be verified by receiving an input selecting one or more pieces of the listed tier information. In the example shown in FIG. 12, one or more pieces of tier information can be selected by checking the check boxes displayed in association with each piece of tier information. The output unit 17 displays on the display 100 the same number (one or more) of specific frames F as the number of pieces of tier information whose input the input receiving unit 12 received.
The configuration of each unit when the input receiving unit 12 receives the input of a plurality of pieces of tier information and the output unit 17 displays a plurality of specific frames F on the display 100 is described below. When the input receiving unit 12 receives the input of one piece of tier information and the output unit 17 displays one specific frame F on the display 100, the configuration of each unit can be the same as in the first embodiment.
FIG. 13 shows an example in which the output unit 17 displays a plurality of specific frames F on the display 100. The illustrated display 100 shows part of the steel materials stacked and stored in multiple tiers (see FIG. 3). Two specific frames F1 and F2 are displayed superimposed on this image. Each of the plurality of specific frames F1 and F2 displayed on the display 100 is associated with one of the pieces of tier information whose input the input receiving unit 12 received. The output unit 17 may display the plurality of specific frames F1 and F2 in such a way that the associated tier information can be identified.
In the example shown in FIG. 13, the circled number displayed in the upper left corner of each of the specific frames F1 and F2 indicates the tier information associated with it. That is, it can be identified that the specific frame F1 is associated with the tier information of the third tier from the bottom, and the specific frame F2 with the tier information of the fourth tier from the bottom. Using such information, the operator can grasp which tier's identification mark should be fitted into each of the plurality of specific frames F1 and F2.
The tier information associated with each specific frame may also be identified and displayed in other display forms. FIG. 14 shows another example. In the illustrated example, in addition to the specific frames F1 and F2, auxiliary frames G1 to G3 are displayed above and below them. In this example, the tier information can be identified from the position within the group of frames consisting of the specific frames F1 and F2 and the auxiliary frames G1 to G3. For example, the specific frame F1 shown in FIG. 14 is located third from the bottom in the frame group, from which it can be seen that the tier information associated with the specific frame F1 is the third tier from the bottom. The auxiliary frames G1 to G3 may have the same design as the specific frames F1 and F2, differing only in shape and size, or their design itself may differ, as shown in FIG. 14. The auxiliary frames G1 to G3 can also be made smaller than the specific frames F1 and F2; this reduces the inconvenience of the auxiliary frames G1 to G3 impairing the visibility of the image to be captured displayed on the display 100.
Each of the plurality of specific frames F1 and F2 displayed on the display 100 may be individually changeable in at least one of its display position within the display 100, its shape, and its size in accordance with a user input. Alternatively, when at least one of the display position, shape, and size of one specific frame F is changed, the display position, shape, and size of the other specific frames F may be changed in the same way. As for the method of changing: in the case of a touch panel display capable of recognizing multiple points, for example, the size can be changed by a technique such as pinch-in/pinch-out, and the display position by a method such as touching and dragging an arbitrary position within the specific frame. In the case of a touch panel display that can recognize only one point, the size can be changed by a method such as touching and dragging one side of the specific frame or the intersection of two of its sides (touch and slide), and the display position by a method such as touching and dragging an arbitrary position within the specific frame (touch and slide). In the case of a display that is not a touch panel, a predetermined button may be used.
When the imaging unit 14 captures an image, it associates with the image data of the captured image information (specific frame position information) indicating the position of the specific frame F at the time of capture (e.g., the position within the image data or within the display 100). When a plurality of specific frames F were displayed at the time of capture, specific frame position information indicating the position of each of the plurality of specific frames F is associated with the image data. Each of the plurality of pieces of specific frame position information is associated with the tier information that is associated with its specific frame F.
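When several frames are shown at capture time, the image data carries one frame position record per frame, each tagged with its tier information. A minimal sketch of that association follows; the names (`TaggedFrame`, `capture_with_frames`) and the dictionary layout are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedFrame:
    tier: int        # tier information associated with this specific frame
    position: tuple  # (left, top, width, height) within the image data

def capture_with_frames(pixels, frames):
    """S3 with multiple frames: pair the image data with every frame's
    position information and its associated tier information."""
    return {"pixels": pixels, "frames": list(frames)}

shot = capture_with_frames(
    pixels=[0] * 100,
    frames=[TaggedFrame(tier=3, position=(10, 40, 120, 30)),
            TaggedFrame(tier=4, position=(12, 80, 120, 30))],
)
```

Each `TaggedFrame` later tells the recognition step both where to crop and which tier the resulting identification information belongs to.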
When a plurality of pieces of specific frame position information are associated with the image data of the image captured by the imaging unit 14, the image recognition unit 15 performs image recognition processing using only each of the partial images specified by the respective pieces of specific frame position information. The image recognition unit 15 thereby obtains a plurality of recognition results (identification information). The content of the image recognition processing is the same as in the first embodiment. Each of the plurality of pieces of identification information recognized by the image recognition unit 15 is associated with the tier information that was associated with the corresponding specific frame position information.
The correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and acquires the plurality of pieces of identification information associated with the address whose input the input receiving unit 12 received and with one of the plurality of pieces of tier information.
The collation unit 16 determines whether the plurality of pieces of identification information acquired by the correspondence information search unit 13 match the plurality of pieces of identification information recognized by the image recognition unit 15. Specifically, it compares pieces of identification information whose associated tier information matches and determines whether they agree. The collation unit 16 thereby obtains a determination result for each piece of tier information.
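The per-tier determination pairs the two sets of identification information by tier and compares within each pair. A hedged sketch, assuming both sides are given as tier-to-identification-information mappings and using made-up sample values:

```python
def collate_by_tier(stored: dict, recognized: dict) -> dict:
    """Generalized S6: compare identification information tier by tier.

    `stored` comes from the correspondence information search,
    `recognized` from image recognition; both map tier -> identification
    information. Returns tier -> "OK" or "NG".
    """
    return {tier: "OK" if recognized.get(tier) == stored[tier] else "NG"
            for tier in stored}

stored = {1: "A001", 2: "A002", 3: "A003"}
recognized = {1: "A001", 2: "A00?", 3: "A003"}  # tier 2 not fully recognized
```

With these sample inputs, tier 2 yields NG because the partially recognized string cannot equal the stored identification information, matching the behavior shown for FIG. 15.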
FIG. 15 shows an example in which the output unit 17 displays the determination results of the collation unit 16 on the display 100. In this example, all five steel materials stored at address AA1 are targets of verification, and the recognition result and collation result for each are shown. For the steel material in the second tier from the bottom, the recognition accuracy of the image recognition unit 15 was insufficient, and as a result the collation result is NG. Touching the “NG” button on this user interface may cause a transition to the screen shown in FIG. 10. The processing using the user interface shown in FIG. 10 is the same as in the first embodiment.
According to the verification system 1 of the present embodiment described above, the same operational effects as those of the first embodiment can be realized. In addition, since a plurality of steel materials can be targeted for verification at once, the work efficiency of the verification processing improves. Furthermore, since the identification marks of the plurality of steel materials are each distinguished and specified by one of the plurality of specific frames F, the accuracy of the processing that extracts the identification marks from the image can be improved.
<Third Embodiment>
The verification system 1 of the present embodiment differs from the verification system 1 of the first embodiment in that a plurality of steel materials stored at the same address can be targeted for verification at once. The present embodiment is described below. Descriptions of configurations similar to those of the verification system 1 of the first embodiment are omitted as appropriate.
An example of a functional block diagram of the verification system 1 of the present embodiment is shown in FIG. 1, as in the first embodiment.
The input receiving unit 12 can receive the input of the address of the steel materials to be verified, for example via a user interface such as that shown in FIG. 11. The correspondence information search unit 13 then searches the correspondence information stored in the storage unit 11 and specifies the number of steel materials (first steel materials) stored at the address whose input the input receiving unit 12 received. For example, the correspondence information search unit 13 may search the correspondence information stored in the storage unit 11 and specify the number of first steel materials stored at that address by counting the number of pieces of steel material identification information associated with the address whose input the input receiving unit 12 received. The output unit 17 then displays on the viewfinder (display) the same number of specific frames F (one or more) as the number of first steel materials.
The configuration of each unit when there are a plurality of first steel materials is described below. When there is one first steel material, the configuration of each unit can be the same as in the first embodiment.
FIG. 16 shows an example in which the output unit 17 displays on the display 100 the same number of specific frames F as the number of first steel materials. The display 100 shows five stacked steel materials. In addition, five specific frames F1 to F5 are displayed superimposed on the image of the steel materials. Each of the five specific frames F1 to F5 is associated with tier information. The output unit 17 displays the plurality of specific frames F1 to F5 in such a way that the associated tier information can be identified. In the illustrated example, the first to fifth tiers are associated in order from the bottom according to the order in which the five specific frames F1 to F5 are arranged. That is, the operator can identify the tier information associated with each of the specific frames F1 to F5 based on the order in which the plurality of specific frames F1 to F5 are arranged. Using such information, the operator can grasp which tier's identification mark should be fitted into each of the plurality of specific frames F1 to F5. The tier information associated with each specific frame can also be identified and displayed by the same configuration as in the second embodiment.
In the example shown in FIG. 16, a cross mark M is displayed in association with each specific frame F. When a cross mark M is touched, the specific frame associated with that cross mark M may disappear or may change into the auxiliary frame described in the second embodiment. That is, the input receiving unit 12 may receive the input of the tier information of the steel materials to be verified by receiving an input selecting one or more of the plurality of specific frames F1 to F5. In the example shown in FIG. 16, the tier information corresponding to the specific frames F whose cross marks M have not been touched, in other words, the tier information corresponding to the specific frames F remaining on the display 100 when the input receiving unit 12 receives the imaging instruction input, is input as the tier information of the steel materials to be verified.
Each of the plurality of specific frames F1 to F5 displayed on the display 100 may be individually changeable in at least one of its display position within the display 100, its shape, and its size in accordance with a user input. Alternatively, when at least one of the display position, shape, and size of one specific frame F is changed, the display position, shape, and size of the other specific frames F may be changed in the same way. As for the method of changing: in the case of a touch panel display capable of recognizing multiple points, for example, the size can be changed by a technique such as pinch-in/pinch-out, and the display position by a method such as touching and dragging an arbitrary position within the specific frame. In the case of a touch panel display that can recognize only one point, the size can be changed by a method such as touching and dragging one side of the specific frame or the intersection of two of its sides (touch and slide), and the display position by a method such as touching and dragging an arbitrary position within the specific frame (touch and slide). In the case of a display that is not a touch panel, a predetermined button may be used.
When the imaging unit 14 captures an image, it associates with the image data of the captured image information (specific frame position information) indicating the position of the specific frame F at the time of capture (e.g., the position within the image data or within the display 100). When a plurality of specific frames F were displayed at the time of capture, specific frame position information indicating the position of each of the plurality of specific frames F is associated with the image data. Each of the plurality of pieces of specific frame position information is associated with the tier information that is associated with its specific frame F.
When a plurality of pieces of specific frame position information are associated with the image data of the image captured by the imaging unit 14, the image recognition unit 15 performs image recognition processing using only each of the partial images specified by the respective pieces of specific frame position information. The image recognition unit 15 thereby obtains a plurality of recognition results (identification information). The content of the image recognition processing is the same as in the first embodiment. Each of the plurality of pieces of identification information recognized by the image recognition unit 15 is associated with the tier information that was associated with the corresponding specific frame position information.
The correspondence information search unit 13 searches the correspondence information stored in the storage unit 11 and acquires the plurality of pieces of identification information associated with the address whose input the input receiving unit 12 received and with one of the plurality of pieces of tier information.
The collation unit 16 determines whether the plurality of pieces of identification information acquired by the correspondence information search unit 13 match the plurality of pieces of identification information recognized by the image recognition unit 15. Specifically, it compares pieces of identification information whose associated tier information matches and determines whether they agree. The collation unit 16 thereby obtains a determination result for each piece of tier information.
FIG. 15 shows an example in which the output unit 17 displays the determination results of the collation unit 16 on the display 100. In this example, all five steel materials stored at address AA1 are targets of verification, and the recognition result and collation result for each are shown. For the steel material in the second tier from the bottom, the recognition accuracy of the image recognition unit 15 was insufficient, and as a result the collation result is NG. Touching the “NG” button on this user interface may cause a transition to the screen shown in FIG. 10. The processing using the user interface shown in FIG. 10 is the same as in the first embodiment.
According to the verification system 1 of the present embodiment described above, the same operational effects as those of the first and second embodiments can be realized. In addition, since a plurality of steel materials can be targeted for verification at once, the work efficiency of the verification processing improves. Furthermore, since the identification marks of the plurality of steel materials are each distinguished and specified by one of the plurality of specific frames F, the accuracy of the processing that extracts the identification marks from the image can be improved.
<第4の実施形態>
　本実施形態の照合システム1は、複数の鋼材を一度に照合対象とすることができる第2の実施形態及び第3の実施形態の構成を基本とする。すなわち、本実施形態の照合システム1は、複数の特定フレームFをディスプレイ100に表示することができる。
<Fourth Embodiment>
The collation system 1 of this embodiment is based on the configuration of the second embodiment and the third embodiment that can target a plurality of steel materials at one time. That is, the collation system 1 of the present embodiment can display a plurality of specific frames F on the display 100.
　ところで、複数段に積み重ねて保管された複数の鋼材の識別マークの位置が、図3に示すようにほぼ一列(例:積層方向に一列。図3の上下方向。)に揃っていると、図13、図14及び図16に示すような一列に揃った複数の特定フレームF各々の中に、複数の識別マーク各々を収めることができる。しかし、図17に示すように、識別マークは一列に揃わず、その位置がばらつくことがある。このような場合、図17の左側の図に示すように、一列に揃った複数の特定フレームF各々の中に、複数の識別マーク各々を収めることができない。 By the way, when the positions of the identification marks of a plurality of steel materials stacked and stored in multiple stages are roughly aligned in one line as shown in FIG. 3 (e.g., one line in the stacking direction; the vertical direction in FIG. 3), each of the identification marks can be placed inside one of the specific frames F aligned in a line as shown in FIGS. 13, 14 and 16. However, as shown in FIG. 17, the identification marks may not line up, and their positions may vary. In such a case, as shown on the left side of FIG. 17, the identification marks cannot all be placed inside the specific frames F aligned in a line.
 そこで、本実施形態の入力受付部12は、複数の特定フレームFを各々個別に移動させることができる(図17の右側の図)。例えば、図17に示すユーザインターフェースにおいて、特定フレームFをタッチアンドスライドさせることで、特定フレームFの表示位置が移動してもよい。結果、図17の右側の図に示すように、識別マークが一列に揃っていない場合であっても、複数の特定フレームF各々の中に、複数の識別マーク各々を収めることができる。 Therefore, the input receiving unit 12 of the present embodiment can individually move a plurality of specific frames F (the diagram on the right side of FIG. 17). For example, in the user interface shown in FIG. 17, the display position of the specific frame F may be moved by touching and sliding the specific frame F. As a result, as shown in the diagram on the right side of FIG. 17, even if the identification marks are not aligned, each of the plurality of identification marks can be contained in each of the plurality of specific frames F.
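The individually movable specific frames F could be modeled as follows. This is a hypothetical sketch in Python (the patent prescribes no implementation); `move` corresponds to the touch-and-slide gesture, and the field names and default sizes are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SpecificFrame:
    stage: int   # stage info associated with this frame
    x: int       # top-left corner on the display, in pixels
    y: int
    w: int = 120
    h: int = 40

    def move(self, dx: int, dy: int) -> None:
        """Shift only this frame, as by touch-and-slide (FIG. 17, right side)."""
        self.x += dx
        self.y += dy

# Frames start aligned in one column; the operator drags the middle frame
# sideways so that it covers an identification mark that is out of line.
frames = [SpecificFrame(stage=i + 1, x=100, y=300 - 60 * i) for i in range(3)]
frames[1].move(35, -5)
```

Because each frame is an independent object, moving one leaves the positions of the others untouched, which matches the behavior described above.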
 本実施形態の照合システム1のその他の構成は、第1乃至第3の実施形態と同様である。 Other configurations of the collation system 1 of the present embodiment are the same as those of the first to third embodiments.
 以上説明した本実施形態の照合システム1によれば、第1乃至第3の実施形態と同様の作用効果を実現することができる。また、複数段に積み重ねて保管された複数の鋼材の識別マークの位置が一列に揃わず、ばらついている場合であっても、複数の特定フレームF各々の中に、複数の識別マーク各々を収めることができる。 According to the verification system 1 of the present embodiment described above, the same operational effects as those of the first to third embodiments can be realized. Further, even when the positions of the identification marks of the plurality of steel materials stacked and stored in a plurality of stages are not aligned and vary, the plurality of identification marks are stored in each of the plurality of specific frames F. be able to.
<第5の実施形態>
　本実施形態の照合システム1は、複数の鋼材を一度に照合対象とすることができる第2の実施形態及び第3の実施形態の構成を基本とする。すなわち、本実施形態の照合システム1は、複数の特定フレームFをディスプレイ100に表示することができる。
<Fifth Embodiment>
The collation system 1 of this embodiment is based on the configuration of the second embodiment and the third embodiment that can target a plurality of steel materials at one time. That is, the collation system 1 of the present embodiment can display a plurality of specific frames F on the display 100.
 本実施形態の照合システム1は、複数段に積み重ねて保管された複数の鋼材の識別マークの位置が一列(例:積層方向に一列)に揃わず、ばらついている場合の不都合を、第4の実施形態と異なる構成で解決可能に構成している。 The collation system 1 of the present embodiment has a fourth problem in the case where the positions of the identification marks of a plurality of steel materials stacked and stored in a plurality of stages are not aligned in one line (eg, one line in the stacking direction) and vary. The configuration is different from that of the embodiment and can be solved.
　入力受付部12は、ディスプレイ100に表示されている複数の特定フレームFの中の一部を指定する指定入力と、一部の特定フレームFが指定されている状態で撮像する撮像指示入力とを受付ける。撮像部14は、入力受付部12が受付けた撮像指示入力に従い撮像する。そして、画像認識部15は、撮像部14が撮像した画像の中の当該画像撮像時に指定されていた特定フレームF内の一部画像のみを用いて画像認識処理を行う。なお、出力部17は、指定された状態で撮像されたことがある特定フレームFと、指定された状態で撮像されたことがない特定フレームFとを識別可能に表示してもよい。以下具体例を用いて、より詳細に説明する。 The input receiving unit 12 of the present embodiment accepts a designation input that designates some of the plurality of specific frames F displayed on the display 100, and an imaging instruction input that triggers imaging while those specific frames F are designated. The imaging unit 14 captures an image in accordance with the imaging instruction input received by the input receiving unit 12. The image recognition unit 15 then performs image recognition processing using only the partial images inside the specific frames F that were designated at the time the image was captured. Note that the output unit 17 may display specific frames F that have been imaged in a designated state and specific frames F that have not been imaged in a designated state in a distinguishable manner. This is described in more detail below with a specific example.
　図18に出力部17による表示例を示す。図示する例の場合、ディスプレイ100には複数の鋼材の一部が表示されている。複数の鋼材各々に表示された識別マークの位置は一列(例:積層方向に一列)に揃わず、ばらついている。ディスプレイ100には、3つの特定フレームF1乃至F3が表示されている。そして、各特定フレームF1乃至F3の左上隅に、3つの記号が記されている。これら3つの記号は、左から順に、「対応付けられている段情報を示す情報」、「指定された状態で撮像されたことがあるか否かを示す情報」、「指定されているか否かを示す情報」となっている。「対応付けられている段情報を示す情報」は、第2の実施形態で説明したとおりである。 FIG. 18 shows a display example by the output unit 17. In the illustrated example, part of the plurality of steel materials is shown on the display 100. The positions of the identification marks on the steel materials are not aligned in one line (e.g., one line in the stacking direction) but vary. Three specific frames F1 to F3 are displayed on the display 100, and three symbols are written in the upper left corner of each. From left to right, these are "information indicating the associated stage information", "information indicating whether the frame has been imaged in a designated state", and "information indicating whether the frame is currently designated". The "information indicating the associated stage information" is as described in the second embodiment.
　「指定された状態で撮像されたことがあるか否かを示す情報」は「済」又は「未」の文字である。「済」は指定された状態で撮像されたことがあることを示し、「未」は指定された状態で撮像されたことがないことを示す。「指定されているか否かを示す情報」はチェックボックスとなっており、作業者による入力が可能になっている。チェックが入っている特定フレームFは指定されている特定フレームFであり、チェックが入っていない特定フレームFは指定されていない特定フレームFである。なお、指定された状態で撮像されたことがある特定フレームFはチェックできないようになっている。 The "information indicating whether the frame has been imaged in a designated state" is the character 「済」 (done) or 「未」 (not yet): 「済」 indicates that the frame has been imaged in a designated state, and 「未」 that it has not. The "information indicating whether the frame is currently designated" is a check box that the worker can operate: a checked specific frame F is designated, and an unchecked one is not. Note that a specific frame F that has already been imaged in a designated state can no longer be checked.
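The rules above (a frame shows 「済」 or 「未」, carries a check box, and can no longer be checked once imaged in a designated state) amount to a small per-frame state machine. The following Python sketch is a hypothetical illustration; the class and method names are not from the patent:

```python
class FrameState:
    """Designation / imaged state of one specific frame F (fifth embodiment)."""

    def __init__(self):
        self.imaged = False      # True once imaged while designated: 「未」 -> 「済」
        self.designated = False  # state of the check box

    def toggle_check(self) -> bool:
        """Operator taps the check box; refused for frames already imaged."""
        if self.imaged:
            return False
        self.designated = not self.designated
        return True

    def on_capture(self) -> None:
        """Called when the shooting button is touched with this frame checked."""
        if self.designated:
            self.imaged = True
            self.designated = False  # the box becomes unselectable thereafter

    def label(self) -> str:
        return "済" if self.imaged else "未"
```

A frame thus moves one way from 「未」 to 「済」, which is what lets the worker see at a glance which stages remain to be imaged.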
　作業者は、例えば図18に示すように複数の特定フレームFの中の一部のみ(図の場合、F1)がそのフレーム内に識別マークを収めている場合、その一部のみを指定する。すなわち、チェックボックスにチェックを入れる。もしくは1つまたは複数の特定フレームFを指定し、その指定した特定フレームF内に所定の鋼材の識別マークを収める。そして、当該状態を維持したまま撮像指示入力(撮影ボタンのタッチ)を行う。入力受付部12が撮像指示入力を受付けると、撮像部14は画像を撮像し、撮像した画像の画像データに、撮像時点で指定されていた特定フレームF1の位置を示す情報(特定フレーム位置情報)を対応付ける。また、入力受付部12は撮像時点で指定されていた特定フレームF1に対応付けられている段情報を、照合対象の鋼材の段情報として受付ける。そして、入力受付部12が受付けた段情報は、上記画像の画像データに対応付けられる。 For example, as shown in FIG. 18, when only some of the plurality of specific frames F (F1 in the figure) contain an identification mark within the frame, the worker designates only those frames, i.e., checks their check boxes. Alternatively, the worker designates one or more specific frames F and then brings the identification mark of the intended steel material into each designated frame. The worker then performs an imaging instruction input (touches the shooting button) while maintaining that state. When the input receiving unit 12 receives the imaging instruction input, the imaging unit 14 captures an image and associates with the image data information indicating the position of the specific frame F1 that was designated at the time of capture (specific frame position information). The input receiving unit 12 also accepts the stage information associated with the designated specific frame F1 as the stage information of the steel material to be collated, and this stage information is likewise associated with the image data.
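The association of the captured image data with the specific frame position information and stage information of the designated frames might be bundled as below. This is an illustrative sketch; the record layout and field names are assumptions, not from the patent:

```python
def bundle_capture(image_bytes, frames):
    """Attach to the captured image the positions (specific frame position
    information) and the stage info of the frames designated at capture time."""
    designated = [f for f in frames if f["designated"]]
    return {
        "image": image_bytes,
        "frame_positions": [(f["x"], f["y"], f["w"], f["h"]) for f in designated],
        "stages": [f["stage"] for f in designated],
    }

# Only F1 (stage 1) is designated, as in FIG. 18; F2 is not recorded.
frames = [
    {"stage": 1, "designated": True,  "x": 40, "y": 200, "w": 120, "h": 40},
    {"stage": 2, "designated": False, "x": 60, "y": 140, "w": 120, "h": 40},
]
record = bundle_capture(b"<jpeg data>", frames)
```

Keeping positions and stages as parallel lists preserves the one-to-one correspondence between each specific frame position and its stage information, as the text requires.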
　図18に示す状態で撮像部14が画像を撮像した後も、出力部17は図18に示す画面表示を継続してもよい。ただし、特定フレームF1は指定された状態で撮像されたので、左上隅にある「未」の文字が「済」に代わる。また、特定フレームF3と同様に、チェックボックスが選択できない状態となる。作業者はそのような表示を視認することで、未だ撮像を行っていないのは下から2段目の鋼材の識別マークであることを認識することができる。 The output unit 17 may continue the screen display shown in FIG. 18 even after the imaging unit 14 captures an image in that state. However, since the specific frame F1 has now been imaged in a designated state, the 「未」 (not yet) character in its upper left corner changes to 「済」 (done), and, like the specific frame F3, its check box becomes unselectable. By viewing such a display, the worker can recognize that it is the identification mark of the steel material in the second stage from the bottom that has not yet been imaged.
　作業者は、1つ又は複数の鋼材の識別マークを撮像すると、照合処理を開始する指示入力(照合ボタンのタッチ)を行うことができる。入力受付部12がこのような照合処理を開始する指示入力を受付けると、それまでに撮像された画像の画像データ、特定フレーム位置情報及び段情報を用いて、画像認識部15、照合部16、対応情報検索部13及び記憶部11による照合処理、及び、出力部17による照合結果の出力が行われる。画像認識部15、照合部16、対応情報検索部13及び記憶部11による照合処理、及び、出力部17による照合結果の出力の内容は第1乃至第4の実施形態と同様である。 After imaging the identification marks of one or more steel materials, the worker can input an instruction to start the collation process (touch the collation button). When the input receiving unit 12 receives this instruction, the collation process by the image recognition unit 15, the collation unit 16, the correspondence information search unit 13 and the storage unit 11, and the output of the collation result by the output unit 17, are performed using the image data, specific frame position information and stage information captured up to that point. The contents of this collation process and of the collation result output are the same as in the first to fourth embodiments.
　以上説明した本実施形態の照合システム1によれば、第1乃至第4の実施形態と同様の作用効果を実現することができる。また、複数段に積み重ねて保管された複数の鋼材の識別マークの位置が一列に揃わず、ばらついている場合であっても、複数の特定フレームF各々の中に、個別に複数の識別マーク各々を収め、撮像することができる。なお、作業者は、各特定フレームFに対応付けられている「指定された状態で撮像されたことがあるか否かを示す情報」を参照することで、複数段に積み重ねて保管された複数の鋼材の中の、いまだ照合対象としていない鋼材を認識することができる。 According to the collation system 1 of the present embodiment described above, the same operational effects as those of the first to fourth embodiments can be realized. In addition, even when the positions of the identification marks of a plurality of steel materials stacked and stored in multiple stages are not aligned in one line but vary, each identification mark can be individually placed in a specific frame F and imaged. Furthermore, by referring to the "information indicating whether the frame has been imaged in a designated state" associated with each specific frame F, the worker can recognize which of the stacked and stored steel materials have not yet been made subject to collation.
<第6の実施形態>
 本実施形態の照合システム1は、互いに有線及び/又は無線で通信可能に構成された端末装置と、サーバ装置とを有する点で、第1乃至第5の実施形態と異なる。
<Sixth Embodiment>
The collation system 1 of this embodiment is different from the first to fifth embodiments in that it includes a terminal device and a server device that are configured to be able to communicate with each other by wire and / or wirelessly.
　図19に、本実施形態の照合システム1の機能ブロック図の一例を示す。端末装置2は、入力受付部12と、撮像部14と、出力部17と、端末側送受信部18とを有する。サーバ装置3は、記憶部11と、対応情報検索部13と、画像認識部15と、照合部16と、サーバ側送受信部19とを有する。端末装置2とサーバ装置3は、端末側送受信部18及びサーバ側送受信部19を介して通信可能となっている。 FIG. 19 shows an example of a functional block diagram of the collation system 1 of the present embodiment. The terminal device 2 includes an input receiving unit 12, an imaging unit 14, an output unit 17, and a terminal-side transmission/reception unit 18. The server device 3 includes a storage unit 11, a correspondence information search unit 13, an image recognition unit 15, a collation unit 16, and a server-side transmission/reception unit 19. The terminal device 2 and the server device 3 can communicate with each other via the terminal-side transmission/reception unit 18 and the server-side transmission/reception unit 19.
　図20に、本実施形態の照合システム1の機能ブロック図の他の一例を示す。端末装置2は、入力受付部12と、撮像部14と、画像認識部15と、出力部17と、端末側送受信部18とを有する。サーバ装置3は、記憶部11と、対応情報検索部13と、照合部16と、サーバ側送受信部19とを有する。端末装置2とサーバ装置3は、端末側送受信部18及びサーバ側送受信部19を介して通信可能となっている。 FIG. 20 shows another example of a functional block diagram of the collation system 1 of the present embodiment. The terminal device 2 includes an input receiving unit 12, an imaging unit 14, an image recognition unit 15, an output unit 17, and a terminal-side transmission/reception unit 18. The server device 3 includes a storage unit 11, a correspondence information search unit 13, a collation unit 16, and a server-side transmission/reception unit 19. The terminal device 2 and the server device 3 can communicate with each other via the terminal-side transmission/reception unit 18 and the server-side transmission/reception unit 19.
　図19及び図20に示す端末側送受信部18及びサーバ側送受信部19を除くその他の部の構成は、第1乃至第5の実施形態と同様である。 The configurations of the units other than the terminal-side transmission/reception unit 18 and the server-side transmission/reception unit 19 shown in FIGS. 19 and 20 are the same as those of the first to fifth embodiments.
 端末側送受信部18及びサーバ側送受信部19は、有線及び/又は無線で通信可能に構成されており、データの送受信が可能になっている。 The terminal-side transmitting / receiving unit 18 and the server-side transmitting / receiving unit 19 are configured to be able to communicate with each other by wire and / or wirelessly, and can transmit and receive data.
　端末側送受信部18は、撮像部14が撮像した画像の中の特定フレームF内の一部画像の画像データのみをサーバ装置3(外部装置)に送信してもよい。または、端末側送受信部18は、撮像部14が撮像した画像の中の特定フレームF内の一部画像を識別する情報(例:特定フレームFの位置を示す特定フレーム位置情報)とともに、画像のデータをサーバ装置3(外部装置)に送信してもよい。 The terminal-side transmission/reception unit 18 may transmit to the server device 3 (an external device) only the image data of the partial images inside the specific frames F in the image captured by the imaging unit 14. Alternatively, the terminal-side transmission/reception unit 18 may transmit the image data to the server device 3 together with information identifying the partial images inside the specific frames F (e.g., specific frame position information indicating the positions of the specific frames F).
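The first option (sending only the partial images inside each specific frame F) amounts to cropping before transmission. A minimal sketch, treating the image as a row-major list of pixel rows and the frames as `(x, y, w, h)` rectangles; both representations are assumptions for illustration:

```python
def crop_frames(image, frame_rects):
    """Cut out the partial image inside each specific frame F.

    image: list of pixel rows; frame_rects: (x, y, w, h) tuples in pixels.
    Only these crops, rather than the full captured image, would then be
    transmitted to the server device 3.
    """
    return [[row[x:x + w] for row in image[y:y + h]]
            for (x, y, w, h) in frame_rects]

image = [[10 * r + c for c in range(6)] for r in range(6)]  # toy 6x6 "image"
crops = crop_frames(image, [(1, 2, 3, 2)])                  # one frame at (1, 2)
```

Sending crops reduces the transmitted data volume; the second option in the text trades that saving for a simpler terminal that sends the whole image plus the frame positions.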
　以下、図21のシーケンス図を用いて、本実施形態の照合方法の処理の流れの一例を説明する。なお、端末側送受信部18は、撮像部14が撮像した画像の中の特定フレームF内の一部画像を識別する情報(例:特定フレームFの位置を示す特定フレーム位置情報)とともに、画像のデータをサーバ装置3(外部装置)に送信するものとする。 Hereinafter, an example of the processing flow of the collation method of the present embodiment is described with reference to the sequence diagram of FIG. 21. Here it is assumed that the terminal-side transmission/reception unit 18 transmits the image data to the server device 3 (an external device) together with information identifying the partial images inside the specific frames F (e.g., specific frame position information indicating the positions of the specific frames F).
　まず、端末装置2の入力受付部12が、例えば図11に示すようなユーザインターフェースを介して、照合対象の鋼材の番地の入力を受付ける(S10)。すると、端末装置2の端末側送受信部18が番地の情報をサーバ装置3に送信する(S11)。サーバ装置3は、サーバ側送受信部19を介して番地の情報を受信する。その後、サーバ装置3の対応情報検索部13はその番地をキーとして記憶部11が記憶している対応情報(図4参照)を検索し、キーに対応付けられている段情報(鋼材の識別情報が対応付けられているもの)を取得する(S12)。すると、サーバ装置3のサーバ側送受信部19は、取得した段情報を端末装置2に返信する(S13)。端末装置2は、端末側送受信部18を介して段情報を取得する。なお、ここで送受信される段情報には、S11で受信した番地が対応付けられていてもよい。 First, the input receiving unit 12 of the terminal device 2 receives the input of the address of the steel material to be collated, for example via a user interface as shown in FIG. 11 (S10). The terminal-side transmission/reception unit 18 of the terminal device 2 then transmits the address information to the server device 3 (S11), which receives it via the server-side transmission/reception unit 19. The correspondence information search unit 13 of the server device 3 then searches the correspondence information stored in the storage unit 11 (see FIG. 4) using the address as a key, and acquires the stage information associated with that key (each piece of which has steel material identification information associated with it) (S12). The server-side transmission/reception unit 19 of the server device 3 returns the acquired stage information to the terminal device 2 (S13), which receives it via the terminal-side transmission/reception unit 18. The stage information transmitted and received here may have the address received in S11 associated with it.
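The lookup in S12 (and the later lookup by address plus stage in S18) keys the correspondence information of FIG. 4. A hypothetical Python sketch; the table contents and function names are illustrative stand-ins, not from the patent:

```python
# Illustrative stand-in for the correspondence information of FIG. 4:
# (address, stage info, identification info)
CORRESPONDENCE = [
    ("AA1", 1, "S-101"),
    ("AA1", 2, "S-102"),
    ("AA2", 1, "S-201"),
]

def stages_for_address(address):
    """S12: the stage info associated with the given address."""
    return sorted(stage for addr, stage, _ in CORRESPONDENCE if addr == address)

def identification_for(address, stage):
    """S18 lookup: identification info keyed by address + stage info."""
    for addr, st, ident in CORRESPONDENCE:
        if addr == address and st == stage:
            return ident
    return None
```

The pair (address, stage info) acts as the unique key into the table, which is why both must accompany the captured image data sent in S16.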
　すると、端末装置2の出力部17は、例えば図12に示すように、取得した段情報をディスプレイ100に一覧表示する。そして、入力受付部12は、当該ユーザインターフェースから、1つ又は複数の段情報を指定する入力を受付ける(S14)。1つ又は複数の段情報を指定する入力を受付けると、端末装置2は撮像モードに切り替わる。すなわち、出力部17は、例えば図13に示すように、ディスプレイ100に撮像対象となる画像を表示する。また、出力部17は、ディスプレイ100に、S14で指定された段情報と同数の特定フレームF1及びF2を表示する。なお、特定フレームF1及びF2各々の左上隅に表示された丸数字が、各々に対応付けられている段情報を示している。作業者は、このような情報を利用して、複数の特定フレームF1及びF2各々の中に何段目の鋼材の識別マークを収めるべきか把握することができる。 The output unit 17 of the terminal device 2 then displays the acquired stage information as a list on the display 100, for example as shown in FIG. 12, and the input receiving unit 12 receives an input designating one or more pieces of stage information from this user interface (S14). Upon receiving such an input, the terminal device 2 switches to the imaging mode. That is, the output unit 17 displays the image to be captured on the display 100, for example as shown in FIG. 13, together with the same number of specific frames F1 and F2 as the pieces of stage information designated in S14. The circled numbers displayed in the upper left corner of the specific frames F1 and F2 indicate the stage information associated with each. Using this information, the worker can grasp which stage's identification mark should be placed in each of the specific frames F1 and F2.
 作業者は、照合システム1の位置、向き等を調整することで、表示されている1つ又は複数の特定フレームF各々に所定の識別マークを収める。そして、作業者は、当該状態を維持させたまま撮像指示入力(例:撮影ボタンのタッチ)を行う。すると、撮像部14は、ディスプレイ100に表示されていた画像を撮像する(S15)。そして、撮像部14は、撮像した画像の画像データに、撮像時点で表示されている1つ又は複数の特定フレームF各々の位置を示す情報(特定フレーム位置情報)を対応付ける。なお、特定フレーム位置情報が複数対応付けられる場合、各々に、各特定フレームFに対応付けられていた段情報が対応付けられる。 The operator adjusts the position, orientation, etc. of the verification system 1 to place a predetermined identification mark in each of the displayed one or more specific frames F. Then, the worker inputs an imaging instruction (for example, touching the shooting button) while maintaining the state. Then, the imaging unit 14 captures an image displayed on the display 100 (S15). Then, the imaging unit 14 associates information (specific frame position information) indicating the position of each of the one or more specific frames F displayed at the time of imaging with the image data of the captured image. When a plurality of pieces of specific frame position information are associated with each other, the stage information associated with each specific frame F is associated with each.
 すると、端末装置2の端末側送受信部18は、撮像画像の画像データを、対応付けられている特定フレーム位置情報及び段情報とともに、サーバ装置3に送信する(S16)。サーバ装置3は、サーバ側送受信部19を介して、撮像画像の画像データ、特定フレーム位置情報及び段情報を受信する。なお、ここで送受信される画像データ、特定フレーム位置情報及び段情報には、S13で受信した段情報に対応付けられていた番地が対応付けられていてもよい。 Then, the terminal-side transmission / reception unit 18 of the terminal device 2 transmits the image data of the captured image to the server device 3 together with the specific frame position information and the step information that are associated with each other (S16). The server device 3 receives the image data of the captured image, the specific frame position information, and the step information via the server side transmission / reception unit 19. The image data, the specific frame position information, and the stage information transmitted / received here may be associated with the address associated with the stage information received in S13.
　その後、サーバ装置3の画像認識部15は、撮像部14が撮像した画像の中の特定フレーム内の一部画像のみを用いて画像認識処理を行う。画像認識部15は、1つ又は複数の鋼材各々の表面に記されている識別マークを抽出するとともに、抽出した識別マークを用いて識別情報を認識する(S17)。また、対応情報検索部13は、S16で取得した番地及び段情報をキーとして記憶部11の対応情報を検索し、1つ又は複数の識別情報を取得する。そして、照合部16は、対応情報検索部13が取得した識別情報と、画像認識部15が認識した識別情報とを利用して、これらが一致するか否か判別する(S18:照合処理)。なお、対応情報検索部13が取得した識別情報、及び、画像認識部15が認識した識別情報が各々複数存在する場合、段情報が一致する識別情報同士を照合処理する。 Thereafter, the image recognition unit 15 of the server device 3 performs image recognition processing using only the partial images inside the specific frames in the image captured by the imaging unit 14; it extracts the identification mark written on the surface of each of the one or more steel materials and recognizes the identification information using the extracted marks (S17). The correspondence information search unit 13 also searches the correspondence information in the storage unit 11 using the address and stage information acquired in S16 as a key, and acquires one or more pieces of identification information. The collation unit 16 then determines whether the identification information acquired by the correspondence information search unit 13 and the identification information recognized by the image recognition unit 15 match (S18: collation process). When there are a plurality of pieces of identification information on each side, the pieces whose stage information matches are collated with each other.
 すると、サーバ装置3のサーバ側送受信部19は、S18の判別結果を端末装置2に返信する(S19)。端末装置2は、端末側送受信部18を介して判別結果を取得する。なお、ここで送受信される判別結果には、番地及び段情報が対応付けられていてもよい。その後、端末装置2の出力部17は、例えば、図15に示すようなユーザインターフェースを作成して、ディスプレイ100に表示する(S20)。 Then, the server side transmission / reception unit 19 of the server device 3 returns the determination result of S18 to the terminal device 2 (S19). The terminal device 2 acquires the determination result via the terminal side transmission / reception unit 18. Note that the determination result transmitted and received here may be associated with an address and stage information. Thereafter, the output unit 17 of the terminal device 2 creates, for example, a user interface as shown in FIG. 15 and displays it on the display 100 (S20).
 以上説明した本実施形態の照合システム1によれば、第1乃至第5の実施形態と同様の作用効果を実現することができる。 According to the verification system 1 of the present embodiment described above, the same operational effects as those of the first to fifth embodiments can be realized.
<付記>
 以下、参考形態の例を付記する。
1. 各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管される複数の鋼材の照合を行う照合システムであって、
 保管されている複数の前記鋼材各々の識別情報と、前記鋼材各々が保管されている前記エリアの前記番地と、複数段に積み重ねられた鋼材群の中の位置を示す段情報とを対応付けた対応情報を記憶する記憶手段と、
 照合対象の前記鋼材の前記番地及び前記段情報の入力を受付ける入力受付手段と、
 前記対応情報を参照し、前記入力受付手段が入力を受付けた前記番地及び前記段情報に対応付けられている前記識別情報を取得する対応情報検索手段と、
 ファインダーを有し、前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段と、
 前記ファインダーに表示されている画像を撮像する撮像手段と、
 前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像のみを用いて画像認識処理を行い、複数の前記鋼材各々の表面に記されている識別マークを抽出するとともに、抽出した前記識別マークを用いて前記識別情報を認識する画像認識手段と、
 前記対応情報検索手段が取得した前記識別情報と、前記画像認識手段が認識した前記識別情報とが一致するか否か判別する照合手段と、
を有する照合システム。
2. 1に記載の照合システムにおいて、
 前記出力手段は、前記照合手段の判別結果を出力する照合システム。
3. 1又は2に記載の照合システムにおいて、
 前記出力手段は、前記画像認識手段による認識結果を出力する照合システム。
4. 1から3のいずれかに記載の照合システムにおいて、
 前記照合システムは同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とすることができ、
 前記入力受付手段は、照合対象の複数の前記鋼材が保管されている前記番地及び複数の前記段情報の入力を受付けることができ、
 前記出力手段は、前記入力受付手段が入力を受付けた前記段情報の数と同数の複数の前記特定フレームを前記ファインダーに表示する照合システム。
5. 4に記載の照合システムにおいて、
 複数の前記特定フレーム各々は、前記入力受付手段が入力を受付けた前記段情報各々と対応付けられており、
 前記出力手段は、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示する照合システム。
6. 4又は5に記載の照合システムにおいて、
 前記対応情報検索手段は、前記入力受付手段が入力を受付けた前記番地に対応付けられている前記段情報を取得し、
 前記出力手段は、前記対応情報検索手段が取得した前記段情報を一覧表示し、
 前記入力受付手段は、前記一覧表示されている前記段情報の中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付ける照合システム。
7. 1から3のいずれかに記載の照合システムにおいて、
 前記照合システムは同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とすることができ、
 前記対応情報検索手段は、前記入力受付手段が入力を受付けた前記番地に保管されている前記鋼材である第1の前記鋼材の数を、前記対応情報において当該番地に対応付けられている前記鋼材の前記識別情報の数を認識することで特定し、
 前記出力手段は、前記第1の鋼材の数と同数の複数の前記特定フレームを前記ファインダーに表示する照合システム。
8. 7に記載の照合システムにおいて、
 複数の前記特定フレーム各々には、前記第1の鋼材各々に対応付けられている前記段情報が対応付けられており、
 前記出力手段は、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示する照合システム。
9. 8に記載の照合システムにおいて、
 前記入力受付手段は、前記ファインダーに表示されている複数の前記特定フレームの中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付ける照合システム。
10. 4から9のいずれかに記載の照合システムにおいて、
 前記ファインダーに表示されている複数の前記特定フレームは、個別に、前記ファインダー内の表示位置、形状及び大きさの中の少なくとも1つを変更することができる照合システム。
11. 4から10のいずれかに記載の照合システムにおいて、
 前記入力受付手段は、前記ファインダーに表示されている複数の前記特定フレームの中の一部を指定する指定入力と、一部の前記特定フレームが指定されている状態で撮像する撮像指示入力とを受付け、
 前記撮像手段は、前記入力受付手段が受付けた前記撮像指示入力に従い撮像し、
 前記画像認識手段は、前記撮像手段が撮像した前記画像の中の当該画像撮像時に指定されていた前記特定フレーム内の一部画像のみを用いて画像認識処理を行う照合システム。
12. 11に記載の照合システムにおいて、
 前記出力手段は、指定された状態で撮像されたことがある前記特定フレームと、指定された状態で撮像されたことがない前記特定フレームとを識別可能に表示する照合システム。
13. 1から12のいずれかに記載の照合システムにおいて、
 前記照合システムは、互いに通信可能に構成された端末装置と、サーバ装置とを有し、
 前記端末装置は、前記入力受付手段と、前記出力手段と、前記撮像手段とを有し、
 前記サーバ装置は、前記記憶手段と、前記対応情報検索手段と、前記照合手段とを有し、
 前記端末装置及び前記サーバ装置のいずれかが、前記画像認識手段を備える照合システム。
14. 1から12のいずれかに記載の照合システムが有する前記入力受付手段と、前記出力手段と、前記撮像手段とを備える端末装置。
15. ファインダーを有し、前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段と、
 前記ファインダーに表示されている前記画像を撮像する撮像手段と、
 前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像のみを外部装置に送信する送信手段と、を有する端末装置。
16. ファインダーを有し、前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段と、
 前記ファインダーに表示されている前記画像を撮像する撮像手段と、
 前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像を識別する情報とともに、前記画像を外部装置に送信する送信手段と、を有する端末装置。
17. 1から12のいずれかに記載の照合システムが有する前記記憶手段と、前記対応情報検索手段と、前記照合手段とを備えるサーバ装置。
18. 17に記載のサーバ装置において、1から12のいずれかに記載の照合システムが有する前記画像認識手段をさらに備えるサーバ装置。
19. ファインダーに表示されている画像を撮像する撮像手段を備えた端末装置用のプログラムであって、
 コンピュータを、
 前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段、
 前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像のみを外部装置に送信する送信手段、
として機能させるためのプログラム。
20. ファインダーに表示されている画像を撮像する撮像手段を備えた端末装置用のプログラムであって、
 コンピュータを、
 前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段、
 前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像を識別する情報とともに、前記画像を外部装置に送信する送信手段、
として機能させるためのプログラム。
21. 各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管される複数の鋼材の照合を行う照合システム用のプログラムであって、
 コンピュータを、
 保管されている複数の前記鋼材各々の識別情報と、前記鋼材各々が保管されている前記エリアの前記番地と、複数段に積み重ねられた鋼材群の中の位置を示す段情報とを対応付けた対応情報を記憶する記憶手段、
 照合対象の前記鋼材の前記番地及び前記段情報の入力を受付ける入力受付手段、
 前記対応情報を参照し、前記入力受付手段が入力を受付けた前記番地及び前記段情報に対応付けられている前記識別情報を取得する対応情報検索手段、
 ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段、
 前記ファインダーに表示されている前記画像を撮像する撮像手段、
 前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像のみを用いて画像認識処理を行い、複数の前記鋼材各々の表面に記されている識別マークを抽出するとともに、抽出した前記識別マークを用いて前記識別情報を認識する画像認識手段、
 前記対応情報検索手段が取得した前記識別情報と、前記画像認識手段が認識した前記識別情報とが一致するか否か判別する照合手段、
として機能させるためのプログラム。
21-2. 21に記載のプログラムにおいて、
 前記出力手段に、前記照合手段の判別結果を出力させるプログラム。
21-3. 21又は21-2に記載のプログラムにおいて、
 前記出力手段に、前記画像認識手段による認識結果を出力させるプログラム。
21-4. 21から21-3のいずれかに記載のプログラムにおいて、
 同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とするために、
 前記入力受付手段に、照合対象の複数の前記鋼材が保管されている前記番地及び複数の前記段情報の入力を受付けさせ、
 前記出力手段に、前記入力受付手段が入力を受付けた前記段情報の数と同数の複数の前記特定フレームを前記ファインダーに表示させるプログラム。
21-5. 21-4に記載のプログラムにおいて、
 複数の前記特定フレーム各々を、前記入力受付手段が入力を受付けた前記段情報各々と対応付け、
 前記出力手段に、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示させるプログラム。
21-6. 21-4又は21-5に記載のプログラムにおいて、
 前記対応情報検索手段に、前記入力受付手段が入力を受付けた前記番地に対応付けられている前記段情報を取得させ、
 前記出力手段に、前記対応情報検索手段が取得した前記段情報を一覧表示させ、
 前記入力受付手段に、前記一覧表示されている前記段情報の中の1つ又は複数を選択する入力を受付けさせることで、照合対象の前記鋼材の前記段情報の入力を受付けさせるプログラム。
21-7. 21から21-3のいずれかに記載のプログラムにおいて、
 同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とするために、
 前記対応情報検索手段に、前記入力受付手段が入力を受付けた前記番地に保管されている前記鋼材である第1の前記鋼材の数を、前記対応情報において当該番地に対応付けられている前記鋼材の前記識別情報の数を認識することで特定させ、
 前記出力手段に、前記第1の鋼材の数と同数の複数の前記特定フレームを前記ファインダーに表示させるプログラム。
21-8. 21-7に記載のプログラムにおいて、
 複数の前記特定フレーム各々を、前記第1の鋼材各々に対応付けられている前記段情報と対応付け、
 前記出力手段に、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示させるプログラム。
21-9. 21-8に記載のプログラムにおいて、
 前記入力受付手段に、前記ファインダーに表示されている複数の前記特定フレームの中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付けさせるプログラム。
21-10. 21-4から21-9のいずれかに記載のプログラムにおいて、
 前記コンピュータを、
 前記ファインダーに表示されている複数の前記特定フレームを、個別に、前記ファインダー内の表示位置、形状及び大きさの中の少なくとも1つを変更する手段として機能させるプログラム。
21-11. 21-4から21-10のいずれかに記載のプログラムにおいて、
 前記入力受付手段に、前記ファインダーに表示されている複数の前記特定フレームの中の一部を指定する指定入力と、一部の前記特定フレームが指定されている状態で撮像する撮像指示入力とを受付けさせ、
 前記撮像手段に、前記入力受付手段が受付けた前記撮像指示入力に従い撮像させ、
 前記画像認識手段に、前記撮像手段が撮像した前記画像の中の当該画像撮像時に指定されていた前記特定フレーム内の一部画像のみを用いて画像認識処理を行わせるプログラム。
21-12. 21-11に記載のプログラムにおいて、
 前記出力手段に、指定された状態で撮像されたことがある前記特定フレームと、指定された状態で撮像されたことがない前記特定フレームとを識別可能に表示させるプログラム。
22. 各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管される複数の鋼材の照合を行う照合方法であって、
 コンピュータが、
 保管されている複数の前記鋼材各々の識別情報と、前記鋼材各々が保管されている前記エリアの前記番地と、複数段に積み重ねられた鋼材群の中の位置を示す段情報とを対応付けた対応情報を記憶しておき、
 照合対象の前記鋼材の前記番地及び前記段情報の入力を受付ける入力受付ステップと、
 前記対応情報を参照し、前記入力受付ステップで入力を受付けた前記番地及び前記段情報に対応付けられている前記識別情報を取得する対応情報検索ステップと、
 ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力ステップと、
 前記ファインダーに表示されている前記画像を撮像する撮像ステップと、
 前記撮像ステップで撮像した前記画像の中の前記特定フレーム内の一部画像のみを用いて画像認識処理を行い、複数の前記鋼材各々の表面に記されている識別マークを抽出するとともに、抽出した前記識別マークを用いて前記識別情報を認識する画像認識ステップと、
 前記対応情報検索ステップで取得した前記識別情報と、前記画像認識ステップで認識した前記識別情報とが一致するか否か判別する照合ステップと、
を実行する照合方法。
22-2. 22に記載の照合方法において、
 前記出力ステップでは、前記照合ステップでの判別結果を出力する照合方法。
22-3. 22又は22-2に記載の照合方法において、
 前記出力ステップでは、前記画像認識ステップでの認識結果を出力する照合方法。
22-4. 22から22-3のいずれかに記載の照合方法において、
 前記照合方法は同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とすることができ、
 前記入力受付ステップでは、照合対象の複数の前記鋼材が保管されている前記番地及び複数の前記段情報の入力を受付けることができ、
 前記出力ステップでは、前記入力受付ステップで入力を受付けた前記段情報の数と同数の複数の前記特定フレームを前記ファインダーに表示する照合方法。
22-5. 22-4に記載の照合方法において、
 複数の前記特定フレーム各々は、前記入力受付ステップで入力を受付けた前記段情報各々と対応付けられており、
 前記出力ステップでは、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示する照合方法。
22-6. 22-4又は22-5に記載の照合方法において、
 前記対応情報検索ステップでは、前記入力受付ステップで入力を受付けた前記番地に対応付けられている前記段情報を取得し、
 前記出力ステップでは、前記対応情報検索ステップで取得した前記段情報を一覧表示し、
 前記入力受付ステップでは、前記一覧表示されている前記段情報の中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付ける照合方法。
22-7. 22から22-3のいずれかに記載の照合方法において、
 前記照合方法は同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とすることができ、
 前記対応情報検索ステップでは、前記入力受付ステップで入力を受付けた前記番地に保管されている前記鋼材である第1の前記鋼材の数を、前記対応情報において当該番地に対応付けられている前記鋼材の前記識別情報の数を認識することで特定し、
 前記出力ステップでは、前記第1の鋼材の数と同数の複数の前記特定フレームを前記ファインダーに表示する照合方法。
22-8. 22-7に記載の照合方法において、
 複数の前記特定フレーム各々には、前記第1の鋼材各々に対応付けられている前記段情報が対応付けられており、
 前記出力ステップでは、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示する照合方法。
22-9. 22-8に記載の照合方法において、
 前記入力受付ステップでは、前記ファインダーに表示されている複数の前記特定フレームの中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付ける照合方法。
22-10. 22-4から22-9のいずれかに記載の照合方法において、
 前記ファインダーに表示されている複数の前記特定フレームは、個別に、前記ファインダー内の表示位置、形状及び大きさの中の少なくとも1つを変更することができる照合方法。
22-11. 22-4から22-10のいずれかに記載の照合方法において、
 前記入力受付ステップでは、前記ファインダーに表示されている複数の前記特定フレームの中の一部を指定する指定入力と、一部の前記特定フレームが指定されている状態で撮像する撮像指示入力とを受付け、
 前記撮像ステップでは、前記入力受付ステップで受付けた前記撮像指示入力に従い撮像し、
 前記画像認識ステップでは、前記撮像ステップで撮像した前記画像の中の当該画像撮像時に指定されていた前記特定フレーム内の一部画像のみを用いて画像認識処理を行う照合方法。
22-12. 22-11に記載の照合方法において、
 前記出力ステップでは、指定された状態で撮像されたことがある前記特定フレームと、指定された状態で撮像されたことがない前記特定フレームとを識別可能に表示する照合方法。
<Appendix>
Hereinafter, examples of the reference form will be added.
1. A collation system for collating a plurality of steel materials stacked in multiple stages and stored in each of a plurality of areas, each area being assigned an address,
Storage means for storing correspondence information in which the identification information of each of the plurality of stored steel materials, the address of the area in which each steel material is stored, and stage information indicating the position within the group of steel materials stacked in multiple stages are associated with one another;
Input accepting means for accepting input of the address of the steel material to be verified and the step information;
Corresponding information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information received by the input receiving means;
Output means having a viewfinder, the output means displaying on the viewfinder an image before and/or after capture, and displaying on the viewfinder, superimposed on the displayed image, a specific frame indicating the partial region of the image to be subjected to image recognition processing;
Imaging means for imaging an image displayed on the viewfinder;
Image recognition processing is performed using only a partial image in the specific frame in the image picked up by the image pickup means, and an identification mark written on the surface of each of the plurality of steel materials is extracted and extracted. Image recognition means for recognizing the identification information using the identification mark;
Collation means for determining whether or not the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means;
A collation system.
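The correspondence lookup and collation described in item 1 can be sketched as follows. This is a minimal illustration only, assuming a simple in-memory table keyed by address and stage; all names and sample values are hypothetical and do not appear in the specification:

```python
# Correspondence information: (address, stage) -> identification information.
# A hypothetical in-memory stand-in for the storage means of item 1.
CORRESPONDENCE = {
    ("A-01", 1): "S1001",
    ("A-01", 2): "S1002",
    ("A-02", 1): "S2001",
}

def search_correspondence(address, stage):
    """Correspondence information search: return the stored ID, or None."""
    return CORRESPONDENCE.get((address, stage))

def collate(address, stage, recognized_id):
    """Collation means: compare the stored identification information with
    the ID recognized from the mark inside the specific frame."""
    expected = search_correspondence(address, stage)
    return expected is not None and expected == recognized_id

print(collate("A-01", 2, "S1002"))  # True: the mark matches the stored record
print(collate("A-01", 2, "S9999"))  # False: a mismatch is reported
```

In the actual system the recognized ID would come from the image recognition means rather than being passed in directly.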
2. In the collation system according to 1,
the output means outputs the determination result of the collation means.
3. In the collation system according to 1 or 2,
the output means outputs the recognition result of the image recognition means.
4. In the collation system according to any one of 1 to 3,
the collation system can collate, at one time, a plurality of the steel materials stored at the same address;
the input reception means can receive input of the address at which the plurality of steel materials to be collated are stored and of a plurality of pieces of stage information; and
the output means displays on the finder the same number of specific frames as the number of pieces of stage information received by the input reception means.
5. In the collation system according to 4,
each of the plurality of specific frames is associated with one piece of the stage information received by the input reception means, and
the output means displays the plurality of specific frames such that the associated stage information is identifiable.
6. In the collation system according to 4 or 5,
the correspondence information search means acquires the stage information associated with the address received by the input reception means;
the output means displays a list of the stage information acquired by the correspondence information search means; and
the input reception means receives input of the stage information of the steel materials to be collated by receiving an input that selects one or more entries of the stage information displayed in the list.
7. In the collation system according to any one of 1 to 3,
the collation system can collate, at one time, a plurality of the steel materials stored at the same address;
the correspondence information search means determines the number of first steel materials, that is, the steel materials stored at the address received by the input reception means, by counting the number of pieces of identification information associated with that address in the correspondence information; and
the output means displays on the finder the same number of specific frames as the number of first steel materials.
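The counting described in item 7 can be sketched as below: the number of steel materials at an address is obtained by counting the identification entries for that address, and one specific frame is produced per steel material, labelled with its stage information so the frames are identifiable (as in item 8). A minimal sketch, assuming the same hypothetical in-memory table as before; all names are illustrative:

```python
# Hypothetical correspondence information: (address, stage) -> ID.
CORRESPONDENCE = {
    ("A-01", 1): "S1001",
    ("A-01", 2): "S1002",
    ("A-01", 3): "S1003",
    ("A-02", 1): "S2001",
}

def count_steels_at(address):
    """Number of 'first steel materials' at an address, found by counting
    the identification information entries associated with that address."""
    return sum(1 for (addr, _stage) in CORRESPONDENCE if addr == address)

def frames_for(address):
    """One specific frame per stored steel material, carrying its stage
    information so each frame's stage is identifiable on the finder."""
    stages = sorted(stage for (addr, stage) in CORRESPONDENCE if addr == address)
    return [{"stage": s} for s in stages]

print(count_steels_at("A-01"))  # 3 steel materials stored at this address
print(frames_for("A-01"))       # three frames, labelled with stages 1..3
```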
8. In the collation system according to 7,
each of the plurality of specific frames is associated with the stage information associated with one of the first steel materials, and
the output means displays the plurality of specific frames such that the associated stage information is identifiable.
9. In the collation system according to 8,
the input reception means receives input of the stage information of the steel materials to be collated by receiving an input that selects one or more of the specific frames displayed on the finder.
10. In the collation system according to any one of 4 to 9,
at least one of the display position, shape, and size within the finder can be changed individually for each of the plurality of specific frames displayed on the finder.
11. In the collation system according to any one of 4 to 10,
the input reception means receives a designation input that designates some of the plurality of specific frames displayed on the finder, and an imaging instruction input for capturing an image while some of the specific frames are designated;
the imaging means captures an image in accordance with the imaging instruction input received by the input reception means; and
the image recognition means performs image recognition processing using only the partial images within the specific frames that were designated when the image was captured.
12. In the collation system according to 11,
the output means displays, in a distinguishable manner, the specific frames that have been captured in a designated state and the specific frames that have never been captured in a designated state.
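The frame designation of items 11 and 12 can be sketched as follows: only the partial images inside the frames designated at capture time are handed to recognition, and each frame remembers whether it has ever been captured while designated, so the display can distinguish the two states. All class and field names are hypothetical, and the "image" is a toy nested list standing in for pixel data:

```python
class SpecificFrame:
    """Hypothetical specific frame: a rectangle on the finder plus state."""
    def __init__(self, stage, x, y, w, h):
        self.stage, self.rect = stage, (x, y, w, h)
        self.designated = False
        self.captured_designated = False  # item 12: distinguishable display state

def capture(frames, image):
    """Capture step: crop only the designated frames' regions and mark them
    as having been captured in a designated state."""
    crops = []
    for f in frames:
        if f.designated:
            x, y, w, h = f.rect
            crops.append([row[x:x + w] for row in image[y:y + h]])
            f.captured_designated = True
    return crops  # image recognition runs on these partial images only

image = [[c for c in range(8)] for _ in range(6)]  # toy 8x6 "image"
frames = [SpecificFrame(1, 0, 0, 2, 2), SpecificFrame(2, 4, 0, 2, 2)]
frames[0].designated = True
crops = capture(frames, image)
print(len(crops))                     # 1: only the designated frame is cropped
print(frames[0].captured_designated)  # True
print(frames[1].captured_designated)  # False
```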
13. In the collation system according to any one of 1 to 12,
the collation system includes a terminal device and a server device configured to communicate with each other;
the terminal device includes the input reception means, the output means, and the imaging means;
the server device includes the storage means, the correspondence information search means, and the collation means; and
either the terminal device or the server device includes the image recognition means.
14. A terminal device comprising the input reception means, the output means, and the imaging means of the collation system according to any one of 1 to 12.
15. A terminal device comprising:
output means that has a finder, displays a pre-capture and/or captured image on the finder, and displays on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
imaging means for capturing the image displayed on the finder; and
transmission means for transmitting to an external device only the partial image within the specific frame of the image captured by the imaging means.
16. A terminal device comprising:
output means that has a finder, displays a pre-capture and/or captured image on the finder, and displays on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
imaging means for capturing the image displayed on the finder; and
transmission means for transmitting the image to an external device together with information identifying the partial image within the specific frame of the image captured by the imaging means.
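The two transmission variants of items 15 and 16 differ only in what leaves the terminal: item 15 sends just the cropped partial image, while item 16 sends the full image together with information identifying the frame region. The payload dictionaries below are purely illustrative stand-ins, not a wire format from the specification:

```python
def crop(image, rect):
    """Extract the partial image inside a frame rectangle (x, y, w, h)."""
    x, y, w, h = rect
    return [row[x:x + w] for row in image[y:y + h]]

def payload_item15(image, rect):
    """Item 15: transmit only the partial image within the specific frame."""
    return {"partial_image": crop(image, rect)}

def payload_item16(image, rect):
    """Item 16: transmit the full image plus frame-identifying information."""
    return {"image": image, "frame_rect": rect}

image = [[0] * 10 for _ in range(10)]  # toy 10x10 "image"
rect = (2, 2, 3, 3)
print(len(payload_item15(image, rect)["partial_image"]))  # 3: cropped rows only
print(len(payload_item16(image, rect)["image"]))          # 10: full image rows
```

The item 15 variant minimizes transmitted data; the item 16 variant lets the server redo the cropping, at the cost of sending the whole image.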
17. A server device comprising the storage means, the correspondence information search means, and the collation means of the collation system according to any one of 1 to 12.
18. The server device according to 17, further comprising the image recognition means of the collation system according to any one of 1 to 12.
19. A program for a terminal device that includes imaging means for capturing an image displayed on a finder, the program causing a computer to function as:
output means for displaying a pre-capture and/or captured image on the finder and displaying on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; and
transmission means for transmitting to an external device only the partial image within the specific frame of the image captured by the imaging means.
20. A program for a terminal device that includes imaging means for capturing an image displayed on a finder, the program causing a computer to function as:
output means for displaying a pre-capture and/or captured image on the finder and displaying on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; and
transmission means for transmitting the image to an external device together with information identifying the partial image within the specific frame of the image captured by the imaging means.
21. A program for a collation system that collates a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas, each area being assigned an address, the program causing a computer to function as:
storage means for storing correspondence information that associates identification information of each of the stored steel materials, the address of the area in which each steel material is stored, and stage information indicating the position of the steel material within the group of steel materials stacked in a plurality of stages;
input reception means for receiving input of the address and the stage information of a steel material to be collated;
correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information received by the input reception means;
output means for displaying a pre-capture and/or captured image on the finder and displaying on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
imaging means for capturing the image displayed on the finder;
image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, thereby extracting an identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; and
collation means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means.
21-2. The program according to 21, causing the output means to output the determination result of the collation means.
21-3. The program according to 21 or 21-2, causing the output means to output the recognition result of the image recognition means.
21-4. In the program according to any one of 21 to 21-3, in order to collate, at one time, a plurality of the steel materials stored at the same address,
the program causes the input reception means to receive input of the address at which the plurality of steel materials to be collated are stored and of a plurality of pieces of stage information, and
causes the output means to display on the finder the same number of specific frames as the number of pieces of stage information received by the input reception means.
21-5. The program according to 21-4, associating each of the plurality of specific frames with one piece of the stage information received by the input reception means, and causing the output means to display the plurality of specific frames such that the associated stage information is identifiable.
21-6. The program according to 21-4 or 21-5, causing the correspondence information search means to acquire the stage information associated with the address received by the input reception means, causing the output means to display a list of the stage information acquired by the correspondence information search means, and causing the input reception means to receive input of the stage information of the steel materials to be collated by receiving an input that selects one or more entries of the stage information displayed in the list.
21-7. In the program according to any one of 21 to 21-3, in order to collate, at one time, a plurality of the steel materials stored at the same address,
the program causes the correspondence information search means to determine the number of first steel materials, that is, the steel materials stored at the address received by the input reception means, by counting the number of pieces of identification information associated with that address in the correspondence information, and
causes the output means to display on the finder the same number of specific frames as the number of first steel materials.
21-8. The program according to 21-7, wherein each of the plurality of specific frames is associated with the stage information associated with one of the first steel materials, the program causing the output means to display the plurality of specific frames such that the associated stage information is identifiable.
21-9. The program according to 21-8, causing the input reception means to receive input of the stage information of the steel materials to be collated by receiving an input that selects one or more of the specific frames displayed on the finder.
21-10. The program according to any one of 21-4 to 21-9, further causing the computer to function as means for individually changing, for each of the plurality of specific frames displayed on the finder, at least one of the display position, shape, and size within the finder.
21-11. The program according to any one of 21-4 to 21-10, causing the input reception means to receive a designation input that designates some of the plurality of specific frames displayed on the finder and an imaging instruction input for capturing an image while some of the specific frames are designated, causing the imaging means to capture an image in accordance with the imaging instruction input received by the input reception means, and causing the image recognition means to perform image recognition processing using only the partial images within the specific frames that were designated when the image was captured.
21-12. The program according to 21-11, causing the output means to display, in a distinguishable manner, the specific frames that have been captured in a designated state and the specific frames that have never been captured in a designated state.
22. A collation method for collating a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas, each area being assigned an address, wherein a computer
stores correspondence information that associates identification information of each of the stored steel materials, the address of the area in which each steel material is stored, and stage information indicating the position of the steel material within the group of steel materials stacked in a plurality of stages, and executes:
an input reception step of receiving input of the address and the stage information of a steel material to be collated;
a correspondence information search step of referring to the correspondence information and acquiring the identification information associated with the address and the stage information received in the input reception step;
an output step of displaying a pre-capture and/or captured image on a finder and displaying on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
an imaging step of capturing the image displayed on the finder;
an image recognition step of performing image recognition processing using only the partial image within the specific frame of the image captured in the imaging step, thereby extracting an identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; and
a collation step of determining whether the identification information acquired in the correspondence information search step matches the identification information recognized in the image recognition step.
22-2. In the collation method according to 22, the output step outputs the determination result of the collation step.
22-3. In the collation method according to 22 or 22-2, the output step outputs the recognition result of the image recognition step.
22-4. In the collation method according to any one of 22 to 22-3,
the collation method can collate, at one time, a plurality of the steel materials stored at the same address;
the input reception step can receive input of the address at which the plurality of steel materials to be collated are stored and of a plurality of pieces of stage information; and
the output step displays on the finder the same number of specific frames as the number of pieces of stage information received in the input reception step.
22-5. In the collation method according to 22-4, each of the plurality of specific frames is associated with one piece of the stage information received in the input reception step, and the output step displays the plurality of specific frames such that the associated stage information is identifiable.
22-6. In the collation method according to 22-4 or 22-5, the correspondence information search step acquires the stage information associated with the address received in the input reception step; the output step displays a list of the stage information acquired in the correspondence information search step; and the input reception step receives input of the stage information of the steel materials to be collated by receiving an input that selects one or more entries of the stage information displayed in the list.
22-7. In the collation method according to any one of 22 to 22-3,
the collation method can collate, at one time, a plurality of the steel materials stored at the same address;
the correspondence information search step determines the number of first steel materials, that is, the steel materials stored at the address received in the input reception step, by counting the number of pieces of identification information associated with that address in the correspondence information; and
the output step displays on the finder the same number of specific frames as the number of first steel materials.
22-8. In the collation method according to 22-7, each of the plurality of specific frames is associated with the stage information associated with one of the first steel materials, and the output step displays the plurality of specific frames such that the associated stage information is identifiable.
22-9. In the collation method according to 22-8, the input reception step receives input of the stage information of the steel materials to be collated by receiving an input that selects one or more of the specific frames displayed on the finder.
22-10. In the collation method according to any one of 22-4 to 22-9, at least one of the display position, shape, and size within the finder can be changed individually for each of the plurality of specific frames displayed on the finder.
22-11. In the collation method according to any one of 22-4 to 22-10, the input reception step receives a designation input that designates some of the plurality of specific frames displayed on the finder and an imaging instruction input for capturing an image while some of the specific frames are designated; the imaging step captures an image in accordance with the imaging instruction input received in the input reception step; and the image recognition step performs image recognition processing using only the partial images within the specific frames that were designated when the image was captured.
22-12. In the collation method according to 22-11, the output step displays, in a distinguishable manner, the specific frames that have been captured in a designated state and the specific frames that have never been captured in a designated state.
 この出願は、2013年3月13日に出願された日本出願特願2013-050843号を基礎とする優先権を主張し、その開示の全てをここに取り込む。 This application claims priority based on Japanese Patent Application No. 2013-050843 filed on Mar. 13, 2013, the entire disclosure of which is incorporated herein.

Claims (22)

  1.  各々番地が割り振られた複数のエリア各々に複数段に積み重ねて保管される複数の鋼材の照合を行う照合システムであって、
     保管されている複数の前記鋼材各々の識別情報と、前記鋼材各々が保管されている前記エリアの前記番地と、複数段に積み重ねられた鋼材群の中の位置を示す段情報とを対応付けた対応情報を記憶する記憶手段と、
     照合対象の前記鋼材の前記番地及び前記段情報の入力を受付ける入力受付手段と、
     前記対応情報を参照し、前記入力受付手段が入力を受付けた前記番地及び前記段情報に対応付けられている前記識別情報を取得する対応情報検索手段と、
     ファインダーを有し、前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段と、
     前記ファインダーに表示されている画像を撮像する撮像手段と、
     前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像のみを用いて画像認識処理を行い、複数の前記鋼材各々の表面に記されている識別マークを抽出するとともに、抽出した前記識別マークを用いて前記識別情報を認識する画像認識手段と、
     前記対応情報検索手段が取得した前記識別情報と、前記画像認識手段が認識した前記識別情報とが一致するか否か判別する照合手段と、
    を有する照合システム。
    A collation system for collating a plurality of steel materials stacked and stored in a plurality of stages in each of a plurality of areas, each area being assigned an address, the system comprising:
    storage means for storing correspondence information that associates identification information of each of the stored steel materials, the address of the area in which each steel material is stored, and stage information indicating the position of the steel material within the group of steel materials stacked in a plurality of stages;
    input reception means for receiving input of the address and the stage information of a steel material to be collated;
    correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and the stage information received by the input reception means;
    output means that has a finder, displays a pre-capture and/or captured image on the finder, and displays on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
    imaging means for capturing the image displayed on the finder;
    image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, thereby extracting an identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; and
    collation means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means.
  2.  請求項1に記載の照合システムにおいて、
     前記出力手段は、前記照合手段の判別結果を出力する照合システム。
    The collation system according to claim 1, wherein the output means outputs the determination result of the collation means.
  3.  請求項1又は2に記載の照合システムにおいて、
     前記出力手段は、前記画像認識手段による認識結果を出力する照合システム。
    The collation system according to claim 1 or 2, wherein the output means outputs the recognition result of the image recognition means.
  4.  請求項1から3のいずれか1項に記載の照合システムにおいて、
     前記照合システムは同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とすることができ、
     前記入力受付手段は、照合対象の複数の前記鋼材が保管されている前記番地及び複数の前記段情報の入力を受付けることができ、
     前記出力手段は、前記入力受付手段が入力を受付けた前記段情報の数と同数の複数の前記特定フレームを前記ファインダーに表示する照合システム。
    The collation system according to any one of claims 1 to 3, wherein
    the collation system can collate, at one time, a plurality of the steel materials stored at the same address;
    the input reception means can receive input of the address at which the plurality of steel materials to be collated are stored and of a plurality of pieces of stage information; and
    the output means displays on the finder the same number of specific frames as the number of pieces of stage information received by the input reception means.
  5.  請求項4に記載の照合システムにおいて、
     複数の前記特定フレーム各々は、前記入力受付手段が入力を受付けた前記段情報各々と対応付けられており、
     前記出力手段は、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示する照合システム。
    The collation system according to claim 4, wherein each of the plurality of specific frames is associated with one piece of the stage information received by the input reception means, and the output means displays the plurality of specific frames such that the associated stage information is identifiable.
  6.  請求項4又は5に記載の照合システムにおいて、
     前記対応情報検索手段は、前記入力受付手段が入力を受付けた前記番地に対応付けられている前記段情報を取得し、
     前記出力手段は、前記対応情報検索手段が取得した前記段情報を一覧表示し、
     前記入力受付手段は、前記一覧表示されている前記段情報の中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付ける照合システム。
    The collation system according to claim 4 or 5, wherein
    the correspondence information search means acquires the stage information associated with the address received by the input reception means;
    the output means displays a list of the stage information acquired by the correspondence information search means; and
    the input reception means receives input of the stage information of the steel materials to be collated by receiving an input that selects one or more entries of the stage information displayed in the list.
  7.  請求項1から3のいずれか1項に記載の照合システムにおいて、
     前記照合システムは同一の前記番地に保管されている複数の前記鋼材を一度に照合対象とすることができ、
     前記対応情報検索手段は、前記入力受付手段が入力を受付けた前記番地に保管されている前記鋼材である第1の前記鋼材の数を、前記対応情報において当該番地に対応付けられている前記鋼材の前記識別情報の数を認識することで特定し、
     前記出力手段は、前記第1の鋼材の数と同数の複数の前記特定フレームを前記ファインダーに表示する照合システム。
    The collation system according to any one of claims 1 to 3, wherein
    the collation system can collate, at one time, a plurality of the steel materials stored at the same address;
    the correspondence information search means determines the number of first steel materials, that is, the steel materials stored at the address received by the input reception means, by counting the number of pieces of identification information associated with that address in the correspondence information; and
    the output means displays on the finder the same number of specific frames as the number of first steel materials.
  8.  請求項7に記載の照合システムにおいて、
     複数の前記特定フレーム各々には、前記第1の鋼材各々に対応付けられている前記段情報が対応付けられており、
     前記出力手段は、対応付けられている前記段情報が識別できるように、複数の前記特定フレームを表示する照合システム。
    The collation system according to claim 7, wherein each of the plurality of specific frames is associated with the stage information associated with one of the first steel materials, and the output means displays the plurality of specific frames such that the associated stage information is identifiable.
  9.  請求項8に記載の照合システムにおいて、
     前記入力受付手段は、前記ファインダーに表示されている複数の前記特定フレームの中の1つ又は複数を選択する入力を受付けることで、照合対象の前記鋼材の前記段情報の入力を受付ける照合システム。
    The collation system according to claim 8, wherein the input reception means receives input of the stage information of the steel materials to be collated by receiving an input that selects one or more of the specific frames displayed on the finder.
  10.  請求項4から9のいずれか1項に記載の照合システムにおいて、
     前記ファインダーに表示されている複数の前記特定フレームは、個別に、前記ファインダー内の表示位置、形状及び大きさの中の少なくとも1つを変更することができる照合システム。
    The collation system according to any one of claims 4 to 9, wherein at least one of the display position, shape, and size within the finder can be changed individually for each of the plurality of specific frames displayed on the finder.
  11.  請求項4から10のいずれか1項に記載の照合システムにおいて、
     前記入力受付手段は、前記ファインダーに表示されている複数の前記特定フレームの中の一部を指定する指定入力と、一部の前記特定フレームが指定されている状態で撮像する撮像指示入力とを受付け、
     前記撮像手段は、前記入力受付手段が受付けた前記撮像指示入力に従い撮像し、
     前記画像認識手段は、前記撮像手段が撮像した前記画像の中の当該画像撮像時に指定されていた前記特定フレーム内の一部画像のみを用いて画像認識処理を行う照合システム。
    The collation system according to any one of claims 4 to 10, wherein
    the input reception means receives a designation input that designates some of the plurality of specific frames displayed on the finder, and an imaging instruction input for capturing an image while some of the specific frames are designated;
    the imaging means captures an image in accordance with the imaging instruction input received by the input reception means; and
    the image recognition means performs image recognition processing using only the partial images within the specific frames that were designated when the image was captured.
  12.  請求項11に記載の照合システムにおいて、
     前記出力手段は、指定された状態で撮像されたことがある前記特定フレームと、指定された状態で撮像されたことがない前記特定フレームとを識別可能に表示する照合システム。
    The collation system according to claim 11, wherein the output means displays, in a distinguishable manner, the specific frames that have been captured in a designated state and the specific frames that have never been captured in a designated state.
  13.  請求項1から12のいずれか1項に記載の照合システムにおいて、
     前記照合システムは、互いに通信可能に構成された端末装置と、サーバ装置とを有し、
     前記端末装置は、前記入力受付手段と、前記出力手段と、前記撮像手段とを有し、
     前記サーバ装置は、前記記憶手段と、前記対応情報検索手段と、前記照合手段とを有し、
     前記端末装置及び前記サーバ装置のいずれかが、前記画像認識手段を備える照合システム。
    The collation system according to any one of claims 1 to 12, comprising a terminal device and a server device configured to communicate with each other, wherein
    the terminal device includes the input reception means, the output means, and the imaging means;
    the server device includes the storage means, the correspondence information search means, and the collation means; and
    either the terminal device or the server device includes the image recognition means.
  14.  請求項1から12のいずれか1項に記載の照合システムが有する前記入力受付手段と、前記撮像手段と、前記出力手段とを備える端末装置。 A terminal device comprising the input reception means, the imaging means, and the output means of the collation system according to any one of claims 1 to 12.
  15.  ファインダーを有し、前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段と、
     前記ファインダーに表示されている前記画像を撮像する撮像手段と、
     前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像のみを外部装置に送信する送信手段と、を有する端末装置。
    A terminal device comprising:
    output means that has a finder, displays a pre-capture and/or captured image on the finder, and displays on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
    imaging means for capturing the image displayed on the finder; and
    transmission means for transmitting to an external device only the partial image within the specific frame of the image captured by the imaging means.
  16.  ファインダーを有し、前記ファインダーに撮像前及び/又は撮像済みの画像を表示するとともに、表示されている前記画像の中の画像認識処理の対象となる一部領域を示す特定フレームを前記画像に重ねて前記ファインダーに表示する出力手段と、
     前記ファインダーに表示されている前記画像を撮像する撮像手段と、
     前記撮像手段が撮像した前記画像の中の前記特定フレーム内の一部画像を識別する情報とともに、前記画像を外部装置に送信する送信手段と、を有する端末装置。
    A terminal device comprising:
    output means that has a finder, displays a pre-capture and/or captured image on the finder, and displays on the finder, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
    imaging means for capturing the image displayed on the finder; and
    transmission means for transmitting the image to an external device together with information identifying the partial image within the specific frame of the image captured by the imaging means.
  17.  請求項1から12のいずれか1項に記載の照合システムが有する前記記憶手段と、前記対応情報検索手段と、前記照合手段とを備えるサーバ装置。 A server device comprising the storage means, the correspondence information search means, and the collation means of the collation system according to any one of claims 1 to 12.
  18.  請求項17に記載のサーバ装置において、請求項1から12のいずれか1項に記載の照合システムが有する前記画像認識手段をさらに備えるサーバ装置。 The server device according to claim 17, further comprising the image recognition means of the collation system according to any one of claims 1 to 12.
  19.  A program for a terminal device provided with imaging means for capturing an image displayed on a viewfinder, the program causing a computer to function as:
    output means for displaying a pre-capture and/or captured image on the viewfinder and for displaying, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; and
    transmission means for transmitting only the partial image within the specific frame of the image captured by the imaging means to an external device.
  20.  A program for a terminal device provided with imaging means for capturing an image displayed on a viewfinder, the program causing a computer to function as:
    output means for displaying a pre-capture and/or captured image on the viewfinder and for displaying, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing; and
    transmission means for transmitting the image to an external device together with information identifying the partial image within the specific frame of the image captured by the imaging means.
  21.  A program for a comparison system that compares a plurality of steel materials stored stacked in a plurality of tiers in each of a plurality of areas, each area being assigned an address, the program causing a computer to function as:
    storage means for storing correspondence information that associates identification information of each of the plurality of stored steel materials, the address of the area in which each steel material is stored, and tier information indicating the position of the steel material within the stacked group of steel materials;
    input reception means for receiving input of the address and the tier information of the steel material to be compared;
    correspondence information search means for referring to the correspondence information and acquiring the identification information associated with the address and the tier information whose input was received by the input reception means;
    output means for displaying a pre-capture and/or captured image on a viewfinder and for displaying, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
    imaging means for capturing the image displayed on the viewfinder;
    image recognition means for performing image recognition processing using only the partial image within the specific frame of the image captured by the imaging means, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; and
    comparison means for determining whether the identification information acquired by the correspondence information search means matches the identification information recognized by the image recognition means.
  22.  A comparison method for comparing a plurality of steel materials stored stacked in a plurality of tiers in each of a plurality of areas, each area being assigned an address, wherein a computer:
    stores in advance correspondence information that associates identification information of each of the plurality of stored steel materials, the address of the area in which each steel material is stored, and tier information indicating the position of the steel material within the stacked group of steel materials; and executes:
    an input reception step of receiving input of the address and the tier information of the steel material to be compared;
    a correspondence information search step of referring to the correspondence information and acquiring the identification information associated with the address and the tier information whose input was received in the input reception step;
    an output step of displaying a pre-capture and/or captured image on a viewfinder and displaying, superimposed on the image, a specific frame indicating the partial area of the displayed image to be subjected to image recognition processing;
    an imaging step of capturing the image displayed on the viewfinder;
    an image recognition step of performing image recognition processing using only the partial image within the specific frame of the image captured in the imaging step, extracting the identification mark written on the surface of each of the plurality of steel materials, and recognizing the identification information using the extracted identification mark; and
    a comparison step of determining whether the identification information acquired in the correspondence information search step matches the identification information recognized in the image recognition step.
PCT/JP2013/080637 2013-03-13 2013-11-13 Comparison system, terminal device, server device, comparison method, and program WO2014141534A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201380074556.9A CN105008251B (en) 2013-03-13 2013-11-13 Check system, terminal installation, server unit and checking method
JP2015505229A JP6123881B2 (en) 2013-03-13 2013-11-13 Verification system, terminal device, server device, verification method and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-050843 2013-03-13
JP2013050843 2013-03-13

Publications (1)

Publication Number Publication Date
WO2014141534A1 true WO2014141534A1 (en) 2014-09-18

Family

ID=51536221

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/080637 WO2014141534A1 (en) 2013-03-13 2013-11-13 Comparison system, terminal device, server device, comparison method, and program

Country Status (3)

Country Link
JP (1) JP6123881B2 (en)
CN (1) CN105008251B (en)
WO (1) WO2014141534A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6262809B2 (en) * 2016-06-28 2018-01-17 新日鉄住金ソリューションズ株式会社 System, information processing apparatus, information processing method, and program
JP2018056253A (en) * 2016-09-28 2018-04-05 パナソニックIpマネジメント株式会社 Component management support system and component management support method
CN110573980B (en) * 2019-07-25 2020-11-06 灵动科技(北京)有限公司 Autopilot system with RFID reader and built-in printer

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5960413A (en) * 1996-03-05 1999-09-28 Amon; James A. Portable system for inventory identification and classification
JP2007326700A (en) * 2006-06-09 2007-12-20 Nippon Steel Corp Steel product management method and management system
JP2008265909A (en) * 2007-04-18 2008-11-06 Hitachi-Ge Nuclear Energy Ltd Material storage position management system, and its method
JP2012144371A (en) * 2010-12-24 2012-08-02 Jfe Steel Corp Article management method
WO2013005445A1 (en) * 2011-07-06 2013-01-10 株式会社インスピーディア Stock collection system and stock collection method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1450549B1 (en) * 2003-02-18 2011-05-04 Canon Kabushiki Kaisha Photographing apparatus with radio information acquisition means and control method therefor
EP1452997B1 (en) * 2003-02-25 2010-09-15 Canon Kabushiki Kaisha Apparatus and method for managing articles
US7290701B2 (en) * 2004-08-13 2007-11-06 Accu-Assembly Incorporated Gathering data relating to electrical components picked from stacked trays
KR100754656B1 (en) * 2005-06-20 2007-09-03 삼성전자주식회사 Method and system for providing user with image related information and mobile communication system
JP2011090662A (en) * 2009-09-25 2011-05-06 Dainippon Printing Co Ltd Business form reception system, cellular phone, server, program and duplicate business form
CN101853387A (en) * 2010-04-02 2010-10-06 北京物资学院 Stereoscopic warehouse goods checking method and system
JP2011221860A (en) * 2010-04-12 2011-11-04 Sanyo Special Steel Co Ltd Steel material identification system and method thereof
JP2012074804A (en) * 2010-09-28 2012-04-12 Promise Co Ltd Camera for photographing certificate, financing examination device and financing examination method


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018150137A (en) * 2017-03-13 2018-09-27 日本電気株式会社 Article management system, article management method, and article management program
CN110597165A (en) * 2019-08-30 2019-12-20 三明学院 Steel piling monitoring system and steel piling monitoring method
JP7464460B2 (en) 2020-06-22 2024-04-09 日本電気通信システム株式会社 Information processing device, distribution status detection system, distribution status detection method, and computer program
WO2022114173A1 (en) * 2020-11-30 2022-06-02 日本製鉄株式会社 Tracking device, tracking method, data structure of tracking data, and program
JPWO2022114173A1 (en) * 2020-11-30 2022-06-02
JP7288231B2 (en) 2020-11-30 2023-06-07 日本製鉄株式会社 Tracking device, tracking method and program
KR20230082662A (en) * 2020-11-30 2023-06-08 닛폰세이테츠 가부시키가이샤 Tracking device, tracking method, data structure and program of tracking data
KR102595542B1 (en) 2020-11-30 2023-10-30 닛폰세이테츠 가부시키가이샤 Tracking device, tracking method, data structure and program of tracking data

Also Published As

Publication number Publication date
JP6123881B2 (en) 2017-05-10
CN105008251B (en) 2017-10-31
JPWO2014141534A1 (en) 2017-02-16
CN105008251A (en) 2015-10-28

Similar Documents

Publication Publication Date Title
JP6123881B2 (en) Verification system, terminal device, server device, verification method and program
JP5083395B2 (en) Information reading apparatus and program
JP6527410B2 (en) Character recognition device, character recognition method, and program
WO2018016214A1 (en) Image processing device, image processing method, and program
JP6826293B2 (en) Information information system and its processing method and program
JP6712045B2 (en) Information processing system, its processing method, and program
US20160269586A1 (en) System, control method, and recording medium
JP5454639B2 (en) Image processing apparatus and program
JP5534207B2 (en) Information reading apparatus and program
JP5130081B2 (en) Control device and image data display method
WO2019181441A1 (en) Information processing device, control method, program
JP2008197674A (en) Support information providing method, support information providing program and information providing management system
JP2017097859A (en) Information processing device, and processing method and program thereof
CN102496010A (en) Method for recognizing business cards by combining preview images and photographed images
JP6708935B2 (en) Information processing apparatus, processing method thereof, and program
KR102273198B1 (en) Method and device for recognizing visually coded patterns
WO2021033310A1 (en) Processing device, processing method, and program
JPWO2018179223A1 (en) Remote work support system, remote work support method and program
CN114611475A (en) Information processing apparatus, information processing method, and computer readable medium
JP2017091252A (en) Information input device and information input program
JP6249025B2 (en) Image processing apparatus and program
JP6875061B2 (en) A recording medium for recording an image judgment system, an image judgment method, an image judgment program, and an image judgment program.
US11462014B2 (en) Information processing apparatus and non-transitory computer readable medium
JP6582875B2 (en) Inspection processing apparatus, inspection system, inspection processing method and program
CN115390967A (en) Screen capture auditing method, device, equipment and storage medium in version deployment process

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13877730

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015505229

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13877730

Country of ref document: EP

Kind code of ref document: A1