WO2018167974A1 - Image processing device, control method, and control program - Google Patents

Image processing device, control method, and control program Download PDF

Info

Publication number
WO2018167974A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
numerical value
meter
digit
partial area
Prior art date
Application number
PCT/JP2017/011039
Other languages
French (fr)
Japanese (ja)
Inventor
Yuki Kasahara (笠原 雄毅)
Original Assignee
PFU Limited (株式会社PFU)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PFU Limited (株式会社PFU)
Priority to JP2019505670A (patent JP6707178B2)
Priority to PCT/JP2017/011039 (published as WO2018167974A1)
Publication of WO2018167974A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns

Definitions

  • the present disclosure relates to an image processing device, a control method, and a control program, and more particularly, to an image processing device, a control method, and a control program for recognizing a numerical value in a meter from a captured image of the meter.
  • An instrument reading device is known that captures a scene including a numerical value indicated by a digital instrument, processes the captured image, and detects the numerical value indicated by the instrument (see Patent Document 1).
  • the purpose of the disclosed image processing apparatus, control method, and control program is to enable an appropriate image to be efficiently stored as an evidence image.
  • An image processing apparatus includes a storage unit; an imaging unit that sequentially generates input images obtained by photographing a meter; a numerical value recognition unit that identifies and counts the numerical value in the meter shown in each of the sequentially generated input images and recognizes the numerical value in the meter based on the counting result; a determination unit that determines, for each partial area corresponding to each digit of the numerical value in the meter, whether that partial area is clear; and a control unit that, for each digit, selects an input image whose partial area corresponding to that digit is determined to be clear, and stores at least a part of the selected input image in the storage unit as an evidence image in association with the numerical value recognized by the numerical value recognition unit.
  • a control method is a control method for an image processing apparatus including a storage unit and an imaging unit that sequentially generates input images obtained by photographing a meter. The method identifies and counts the numerical value in the meter shown in each of the sequentially generated input images, recognizes the numerical value in the meter based on the counting result, determines, for each partial area corresponding to each digit of the numerical value in the meter, whether that partial area is clear, and, for each digit, selects an input image whose partial area corresponding to that digit is determined to be clear and stores at least a part of the selected input image in the storage unit as an evidence image in association with the recognized numerical value.
  • a control program is a control program for an image processing apparatus including a storage unit and an imaging unit that sequentially generates input images obtained by photographing a meter. The program causes the image processing apparatus to identify and count the numerical value in the meter shown in each of the sequentially generated input images, recognize the numerical value in the meter based on the counting result, determine, for each partial area corresponding to each digit of the numerical value in the meter, whether that partial area is clear, and, for each digit, select an input image whose partial area corresponding to that digit is determined to be clear and store at least a part of the selected input image in the storage unit as an evidence image in association with the recognized numerical value.
  • the image processing apparatus, the control method, and the control program can efficiently store an appropriate image as an evidence image.
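The per-digit selection described above can be sketched as follows. The data layout (frame IDs and per-digit clarity flags) is hypothetical and only illustrates the selection logic, not the disclosed implementation:

```python
# Hypothetical sketch: for each digit, pick an evidence frame whose
# partial area for that digit was judged clear. The frame/digit data
# structures are illustrative assumptions, not from the disclosure.

def select_evidence(frames):
    """frames: list of dicts {"id": ..., "clear": {digit_no: bool}}.
    Returns {digit_no: frame_id}, mapping each digit to the first
    frame whose partial area for that digit is clear."""
    evidence = {}
    for frame in frames:
        for digit, is_clear in frame["clear"].items():
            if is_clear and digit not in evidence:
                evidence[digit] = frame["id"]
    return evidence

frames = [
    {"id": "img1", "clear": {1: True, 2: False, 3: True, 4: True}},
    {"id": "img2", "clear": {1: True, 2: True, 3: False, 4: True}},
]
print(select_evidence(frames))  # {1: 'img1', 3: 'img1', 4: 'img1', 2: 'img2'}
```

Because the clarity of each digit's partial area is judged independently, different digits may draw their evidence from different input images, which is why per-digit selection stores fewer, better images than keeping every frame.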
  • FIG. 2 is a diagram illustrating a schematic configuration of a storage device 110 and a CPU 120.
  • FIG. 3 is a flowchart showing an example of the operation of the entire processing.
  • FIG. 6 is a diagram showing a schematic configuration of another processing circuit 230.
  • FIG. 1 is a diagram illustrating an example of a schematic configuration of an image processing apparatus 100 according to the embodiment.
  • the image processing apparatus 100 is a portable information processing apparatus such as a tablet PC, a multi-function mobile phone (a so-called smartphone), a portable information terminal, or a notebook PC, and is carried and used by an operator.
  • the image processing apparatus 100 includes a communication device 101, an input device 102, a display device 103, an imaging device 104, a storage device 110, a CPU (Central Processing Unit) 120, and a processing circuit 130.
  • the communication device 101 includes a communication interface circuit including an antenna whose sensitive band is mainly the 2.4 GHz band, the 5 GHz band, or the like.
  • the communication device 101 performs wireless communication with an access point or the like based on an IEEE (Institute of Electrical and Electronics Engineers) 802.11 standard wireless communication scheme.
  • the communication device 101 transmits / receives data to / from an external server device (not shown) via an access point.
  • the communication apparatus 101 supplies the data received from the server apparatus via the access point to the CPU 120, and transmits the data supplied from the CPU 120 to the server apparatus via the access point.
  • the communication device 101 may be any device that can communicate with an external device.
  • the communication device 101 may communicate with a server device via a base station device (not shown) according to a mobile phone communication method, or may communicate with a server device according to a wired LAN communication method.
  • the input device 102 has a touch panel type input device, an input device such as a keyboard and a mouse, and an interface circuit that acquires signals from the input device.
  • the input device 102 receives a user input and outputs a signal corresponding to the user input to the CPU 120.
  • the display device 103 includes a display composed of liquid crystal, organic EL (Electro-Luminescence), and the like, and an interface circuit that outputs image data or various information to the display.
  • the display device 103 is connected to the CPU 120 and displays the image data output from the CPU 120 on a display. Note that the input device 102 and the display device 103 may be integrally configured using a touch panel display.
  • the imaging device 104 includes a reduction optical system type imaging sensor including an imaging element made up of a CCD (Charge Coupled Device) arranged one-dimensionally or two-dimensionally, and an A / D converter.
  • the imaging device 104 is an example of an imaging unit, and sequentially photographs the meter according to an instruction from the CPU 120 (for example, at 30 frames/second).
  • the image sensor generates an analog image signal obtained by photographing the meter and outputs the analog image signal to the A / D converter.
  • the A / D converter performs analog-digital conversion on the output analog image signal to sequentially generate digital image data, and outputs the digital image data to the CPU 120.
  • Instead of the reduction optical system type imaging sensor, a unit magnification optical system type CIS (Contact Image Sensor) employing a CMOS (Complementary Metal Oxide Semiconductor) imaging element may be used.
  • hereinafter, the digital image data of the meter captured by the imaging device 104 may be referred to as an input image.
  • the storage device 110 is an example of a storage unit.
  • the storage device 110 includes a memory device such as a RAM (Random Access Memory) and a ROM (Read Only Memory), a fixed disk device such as a hard disk, or a portable storage device such as a flexible disk and an optical disk. Further, the storage device 110 stores computer programs, databases, tables, and the like used for various processes of the image processing apparatus 100.
  • the computer program may be installed from a computer-readable portable recording medium such as a CD-ROM (compact disk read only memory) or a DVD ROM (digital versatile disk read only memory).
  • the computer program is installed in the storage device 110 using a known setup program or the like.
  • the storage device 110 also stores a management table that manages information related to each input image.
  • the CPU 120 operates based on a program stored in the storage device 110 in advance.
  • the CPU 120 may be a general purpose processor. Instead of the CPU 120, a DSP (digital signal processor), an LSI (large scale integration), or the like may be used. Instead of the CPU 120, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like may be used.
  • the CPU 120 is connected to the communication device 101, the input device 102, the display device 103, the imaging device 104, the storage device 110, and the processing circuit 130, and controls these units.
  • the CPU 120 performs data transmission / reception control via the communication device 101, input control of the input device 102, display control of the display device 103, imaging control of the imaging device 104, control of the storage device 110, and the like. Further, the CPU 120 recognizes a numerical value in the meter reflected in the input image generated by the imaging device 104 and stores the evidence image in the storage device 110.
  • the processing circuit 130 performs predetermined image processing such as correction processing on the input image acquired from the imaging device 104.
  • an LSI, DSP, ASIC, FPGA, or the like may be used as the processing circuit 130.
  • FIG. 2 is a diagram showing a schematic configuration of the storage device 110 and the CPU 120.
  • the storage device 110 stores programs such as a numerical value recognition program 111, a determination program 112, and a control program 113.
  • Each of these programs is a functional module implemented by software operating on the processor.
  • the CPU 120 functions as the numerical value recognition unit 121, the determination unit 122, and the control unit 123 by reading each program stored in the storage device 110 and operating according to each read program.
  • FIG. 3 is a flowchart showing an example of the operation of the entire process performed by the image processing apparatus 100.
  • the operation flow described below is mainly executed by the CPU 120 in cooperation with each element of the image processing apparatus 100 based on a program stored in the storage device 110 in advance.
  • the numerical value recognition unit 121 receives a shooting start instruction signal from the input device 102 when the user uses the input device 102 to input a shooting start instruction for instructing the start of shooting (step S101).
  • Next, the numerical value recognition unit 121 initializes information used for image processing, sets parameters such as the shooting size and focus of the imaging device 104, and causes the imaging device 104 to photograph the meter and generate input images.
  • the numerical value recognition unit 121 sequentially stores input images sequentially generated by the imaging device 104 in the storage device 110.
  • the numerical value recognition unit 121 executes a partial area detection process (step S102).
  • the numerical value recognition unit 121 detects a partial area corresponding to each digit of the numerical value in the meter shown in the input image generated by the imaging device 104. Details of the partial area detection processing will be described later.
  • the numerical value recognition unit 121 determines whether a partial area that can be used in the numerical value recognition process is detected in the partial area detection process (step S103).
  • When no usable partial area is detected in the partial area detection process, the numerical value recognition unit 121 returns the process to step S102 and executes the partial area detection process on a newly generated input image. On the other hand, when a partial area usable in the numerical value recognition process is detected, the numerical value recognition unit 121 executes the numerical value recognition process (step S104). In the numerical value recognition process, the numerical value recognition unit 121 identifies and counts the numerical values in the meter shown in the sequentially generated input images, and recognizes the numerical value in the meter based on the counting result. Details of the numerical value recognition process will be described later.
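Recognition based on a counting result can be sketched as a majority vote over the digit values identified in successive frames. The per-frame data format below is a hypothetical illustration, not the disclosed implementation:

```python
from collections import Counter

# Hypothetical sketch of counting-based recognition: each frame yields
# a {digit_no: value} reading; the recognized value of each digit is
# the one counted most often across frames.

def recognize_by_counting(per_frame_digits):
    """per_frame_digits: list of dicts {digit_no: identified value}.
    Returns {digit_no: most frequently counted value}."""
    counts = {}
    for frame in per_frame_digits:
        for digit_no, value in frame.items():
            counts.setdefault(digit_no, Counter())[value] += 1
    return {d: c.most_common(1)[0][0] for d, c in counts.items()}

# One misread of digit 2 (8 instead of 3) is outvoted by two correct reads.
readings = [{1: 7, 2: 3}, {1: 7, 2: 8}, {1: 7, 2: 3}]
print(recognize_by_counting(readings))  # {1: 7, 2: 3}
```

Voting across frames is what makes the recognition robust to a transient glare or blur corrupting a single input image.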
  • the numerical value recognition unit 121 determines whether or not the numerical value in the meter has been recognized in the numerical value recognition process (step S105).
  • When the numerical value in the meter cannot be recognized, the numerical value recognition unit 121 returns the process to step S102 and repeats the processes of steps S102 to S105 for a newly generated input image.
  • the determination unit 122 and the control unit 123 execute evidence image storage processing (step S106).
  • In the evidence image storage process, the determination unit 122 determines whether each partial area is clear. Further, for each digit, the control unit 123 selects an input image in which the partial area corresponding to that digit is determined to be clear, and stores the image corresponding to the selected input image in the storage device 110 as an evidence image in association with the numerical value recognized by the numerical value recognition unit 121. Details of the evidence image storage process will be described later.
  • Next, the control unit 123 displays the numerical value recognized by the numerical value recognition unit 121 and/or the evidence image selected by the control unit 123 on the display device 103 (step S107), and ends the series of steps. The control unit 123 may also transmit the recognized numerical value and/or the selected evidence image to the server device via the communication device 101.
  • FIG. 4 is a flowchart showing an example of the operation of the partial area detection process. The operation flow shown in FIG. 4 is executed in step S102 of the flowchart shown in FIG.
  • the numerical value recognition unit 121 detects a plate frame from the input image (step S201).
  • FIG. 5A is a diagram showing an example of an input image 500 obtained by photographing a meter (device).
  • a meter has a black casing 501 and a white plate 502 inside the casing 501.
  • the plate 502 is visible through glass (not shown), and a meter portion 503 on which a numerical value such as the amount of electric power measured by the meter is displayed is disposed on the plate 502.
  • the numerical value is shown in white and the background is shown in black.
  • the numerical value recognition unit 121 detects the outer edge of the plate 502 as a plate frame.
  • Hereinafter, a meter that displays a four-digit numerical value will be described as an example; however, the number of digits of the numerical value measured by the meter may be any number of two or more.
  • the numerical value recognition unit 121 extracts a pixel as an edge pixel when the absolute value of the difference between the luminance values or color values (R, G, and B values) of pixels adjacent to each other in the horizontal or vertical direction in the input image, or of pixels separated from each other by a predetermined distance, exceeds a first threshold.
  • Next, the numerical value recognition unit 121 extracts straight lines passing near the extracted edge pixels using the Hough transform or the least squares method, and detects, as the plate frame, the largest of the rectangles formed by four of the extracted straight lines in which pairs of lines are approximately orthogonal to each other.
  • the numerical value recognition unit 121 determines whether each extracted edge pixel is connected to other edge pixels, and labels the connected edge pixels as one group.
  • the numerical value recognition unit 121 may detect, as a plate frame, an outer edge of a region surrounded by the largest group among the extracted groups.
  • the numerical value recognition unit 121 may detect the plate frame using the difference between the color of the housing 501 and the color of the plate 502.
  • For example, the numerical value recognition unit 121 extracts a pixel as a left-end edge pixel when the luminance value or color value of that pixel is less than a second threshold (indicating black) and the luminance value or color value of the pixel adjacent to it on the right, or of a pixel a predetermined distance to its right, is greater than or equal to the second threshold (indicating white).
  • the second threshold value is set to an intermediate value between the value indicating black and the value indicating white.
  • Similarly, the numerical value recognition unit 121 extracts a pixel as a right-end edge pixel when its luminance value or color value is less than the second threshold and the luminance value or color value of the pixel adjacent to it on the left, or of a pixel a predetermined distance to its left, is greater than or equal to the second threshold. Likewise, the numerical value recognition unit 121 extracts a pixel as an upper-end edge pixel when its luminance value or color value is less than the second threshold and the luminance value or color value of the pixel adjacent to it below, or of a pixel a predetermined distance below it, is greater than or equal to the second threshold.
  • Likewise, the numerical value recognition unit 121 extracts a pixel as a lower-end edge pixel when its luminance value or color value is less than the second threshold and the luminance value or color value of the pixel adjacent to it above, or of a pixel a predetermined distance above it, is greater than or equal to the second threshold.
  • the numerical value recognition unit 121 then extracts straight lines passing near the extracted left-end, right-end, upper-end, and lower-end edge pixels using the Hough transform or the least squares method, and detects the rectangle formed by the extracted straight lines as the plate frame.
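The edge-pixel test described above can be sketched as follows, assuming a grayscale image stored as a 2D list and checking only immediately adjacent pixels (the predetermined-distance variant and the Hough-transform line fitting are omitted):

```python
# Minimal sketch of the first-threshold edge test: a pixel is an edge
# pixel when its luminance differs from its right or lower neighbour
# by more than the threshold. Illustrative only.

def extract_edge_pixels(img, threshold):
    """img: 2D list of luminance values. Returns a set of (y, x)
    coordinates of edge pixels."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            right = abs(img[y][x] - img[y][x + 1]) if x + 1 < w else 0
            below = abs(img[y][x] - img[y + 1][x]) if y + 1 < h else 0
            if max(right, below) > threshold:
                edges.add((y, x))
    return edges

# A black region (0) next to a white region (255) yields a vertical edge.
img = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
print(sorted(extract_edge_pixels(img, 128)))  # [(0, 1), (1, 1)]
```

In the described device, straight lines would then be fitted through these edge pixels (e.g. by the Hough transform) and the largest near-orthogonal rectangle taken as the plate frame.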
  • the numerical value recognition unit 121 detects a numerical background frame from the detected area within the plate frame (step S202).
  • the numerical value recognition unit 121 detects the outer edge of the meter portion 503 as a numerical value background frame.
  • the numerical value recognition unit 121 detects the numerical value background frame using a classifier pre-trained to output the position information of the outer edge of the meter portion 503 when an image showing the plate 502 including the meter portion 503 is input. This classifier is pre-trained, for example by deep learning, using a plurality of images obtained by photographing meters, and is stored in the storage device 110 in advance. The numerical value recognition unit 121 detects the numerical value background frame by inputting an image including the detected plate frame to the classifier and acquiring the position information output from the classifier.
  • the numerical value recognition unit 121 may detect the numerical value background frame based on the edge pixels in the input image, as in the case of detecting the plate frame.
  • In that case, the numerical value recognition unit 121 extracts edge pixels from the region including the plate frame of the input image, extracts straight lines passing near the extracted edge pixels, and detects, as the numerical value background frame, the largest of the rectangles formed by four of the extracted straight lines in which pairs of lines are approximately orthogonal to each other.
  • Alternatively, the numerical value recognition unit 121 may detect, as the numerical value background frame, the outer edge of the region surrounded by the largest group of mutually connected edge pixels.
  • Alternatively, the numerical value recognition unit 121 may detect the numerical value background frame using the difference between the color of the plate 502 and the color of the meter portion 503, as in the case of detecting the plate frame.
  • In that case, the numerical value recognition unit 121 extracts a pixel as a left-end edge pixel when its luminance value or color value is greater than or equal to the second threshold (indicating white) and the luminance value or color value of the pixel adjacent to it on the right, or of a pixel a predetermined distance to its right, is less than the second threshold (indicating black).
  • Similarly, the numerical value recognition unit 121 extracts right-end, upper-end, and lower-end edge pixels.
  • the numerical value recognition unit 121 then extracts straight lines passing near the extracted left-end, right-end, upper-end, and lower-end edge pixels using the Hough transform or the least squares method, and detects the rectangle formed by the extracted straight lines as the numerical value background frame.
  • Alternatively, the numerical value recognition unit 121 may detect the marks 504 and detect the numerical value background frame within the region sandwiched between the marks 504 in the horizontal and vertical directions.
  • the numerical value recognition unit 121 detects a partial region corresponding to each digit of the numerical value in the meter from the detected region in the numerical value background frame (step S203).
  • FIG. 5B is a diagram for explaining a partial region.
  • An image 510 shown in FIG. 5B shows a numerical background frame of the meter portion 503 detected from the input image 500.
  • the numerical value recognition unit 121 detects the rectangular areas 511 to 514, each including one digit of the numerical value in the numerical value background frame, as partial areas.
  • For example, the numerical value recognition unit 121 detects the partial areas using a classifier pre-trained to output the position information of each rectangular area including each digit of the numerical value in the meter portion 503 when an image showing the meter portion 503 is input.
  • This classifier is pre-trained, for example by deep learning, using a plurality of images obtained by photographing meters, and is stored in the storage device 110 in advance.
  • the numerical value recognition unit 121 detects a partial region by inputting an image including the detected numerical value background frame to a classifier and acquiring position information output from the classifier.
  • Alternatively, the numerical value recognition unit 121 may detect the partial areas based on the edge pixels in the input image, as in the case of detecting the plate frame.
  • In that case, the numerical value recognition unit 121 extracts edge pixels from the area including the numerical value background frame of the input image, extracts straight lines passing near the extracted edge pixels, and detects rectangular areas formed by four of the extracted straight lines in which pairs of lines are approximately orthogonal to each other.
  • Alternatively, the numerical value recognition unit 121 detects regions surrounded by groups in which the extracted edge pixels are connected to each other.
  • the numerical value recognition unit 121 then recognizes characters in each detected region using a known OCR (Optical Character Recognition) technique, and detects each region from which a single-digit number is detected as a partial area.
  • An image 520 shown in FIG. 5B shows an image in which the entire meter portion 503 is unclear and the difference in luminance value or color value between the numerical value portion and the background portion is small. In the image 520, the rectangular areas 521 to 524 are not detected as partial areas.
  • An image 530 shown in FIG. 5B shows an image in which ambient light such as illumination is reflected on the glass portion covering the front surface of the meter, so that disturbance light is reflected on the entire meter portion 503 and all numerical values cannot be identified. In the image 530, the rectangular areas 531 to 534 are not detected as partial areas.
  • An image 540 shown in FIG. 5B shows an image in which disturbance light is reflected in a part of the meter portion 503 and the numerical value on which the disturbance light is reflected cannot be identified. In the image 540, the rectangular areas 541, 543, and 544 are detected as partial areas, but the rectangular area 542 is not detected as a partial area.
  • An image 550 shown in FIG. 5B shows an image in which a shadow falls on a part of the meter portion 503 and the numerical value on which the shadow falls is erroneously recognized. In the image 550, the rectangular areas 551 to 554 are detected as partial areas.
  • the numerical value recognition unit 121 assigns a digit number to each detected partial area (step S204).
  • For example, the numerical value recognition unit 121 divides the region in the numerical value background frame equally in the horizontal direction by the number of digits of the numerical value indicated by the meter, and assigns digit numbers in ascending order from the rightmost region (digit numbers 1, 2, 3, and 4 are assigned in order from the rightmost region).
  • the numerical value recognition unit 121 then assigns to each detected partial area the digit number of the divided region that includes the center position of that partial area.
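The digit-number assignment just described can be sketched as follows; the frame width, column count, and partial-area spans are illustrative values, not from the disclosure:

```python
# Sketch of digit numbering: the numerical background frame is divided
# equally into columns; digit number 1 is the rightmost column, and each
# partial area receives the digit number of the column containing its
# horizontal center. Inputs are hypothetical.

def assign_digit_numbers(frame_width, num_digits, partial_areas):
    """partial_areas: list of (x_left, x_right) spans inside the frame.
    Returns {(x_left, x_right): digit_number}."""
    col_w = frame_width / num_digits
    result = {}
    for x_left, x_right in partial_areas:
        center = (x_left + x_right) / 2
        col = int(center // col_w)            # 0 = leftmost column
        result[(x_left, x_right)] = num_digits - col  # 1 = rightmost
    return result

# A 400-px-wide frame with 4 digits: the leftmost area is digit 4,
# the rightmost area is digit 1.
print(assign_digit_numbers(400, 4, [(10, 90), (310, 390)]))
# {(10, 90): 4, (310, 390): 1}
```

Using the center position makes the assignment tolerant of partial areas that are slightly narrower or wider than the equally divided columns.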
  • the numerical value recognition unit 121 determines whether or not each detected partial area can be used in the numerical value recognition process (step S205).
  • the numerical value recognition unit 121 determines whether or not each partial area can be used in the numerical value recognition process based on whether or not each partial area includes blur or shine.
  • Here, blur means a region in which the difference between the luminance values of pixels in the image is small because the imaging device 104 is out of focus, or because the same object is captured across a plurality of pixels in the image due to the user's camera shake.
  • The term "shine" means a region in which the luminance values of pixels in a predetermined region of the image are saturated (blown-out highlights) due to the influence of disturbance light or the like.
  • For example, the numerical value recognition unit 121 determines whether each partial area includes blur using a classifier pre-trained to output, when an image is input, a blur degree indicating the degree of blur included in the input image.
  • This classifier is pre-trained, for example by deep learning, using images that show a meter and do not include blur, and is stored in the storage device 110 in advance. Note that this classifier may instead be pre-trained using images that show a meter and include blur.
  • the numerical value recognition unit 121 inputs an image including the detected partial area to the classifier, and determines whether the partial area includes blur depending on whether the blur degree output from the classifier is greater than or equal to a third threshold.
  • the numerical value recognition unit 121 may determine whether or not each partial area includes blur based on the edge strength of the luminance value of each pixel included in the partial area.
  • In that case, the numerical value recognition unit 121 calculates, as the edge strength of each pixel, the absolute value of the difference between the luminance values of pixels adjacent to each other in the horizontal or vertical direction in the partial area, or of pixels separated from each other by a predetermined distance.
  • the numerical value recognition unit 121 determines whether or not the partial area includes blur depending on whether or not the average value of the edge intensities calculated for each pixel in the partial area is equal to or less than the fourth threshold value.
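The edge-strength test for blur can be sketched as follows, assuming a grayscale partial area as a 2D list and using only horizontal neighbour differences as the edge-strength proxy (the threshold value is illustrative):

```python
# Minimal sketch of the fourth-threshold blur test: compute the mean
# absolute difference between horizontally adjacent pixels and flag the
# region as blurred when that mean is at or below the threshold.

def is_blurred(region, threshold):
    """region: 2D list of luminance values for one partial area.
    Returns True when the average edge strength is <= threshold."""
    diffs = [abs(row[x] - row[x + 1])
             for row in region for x in range(len(row) - 1)]
    mean_edge = sum(diffs) / len(diffs)
    return mean_edge <= threshold

sharp = [[0, 255, 0, 255]]      # strong black/white transitions
soft = [[100, 110, 120, 110]]   # weak transitions, as in a defocused shot
print(is_blurred(sharp, 50), is_blurred(soft, 50))  # False True
```

A sharply focused digit has strong black-to-white transitions, so its average edge strength stays well above the threshold; defocus or camera shake smears those transitions and pulls the average down.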
  • the numerical value recognition unit 121 may determine whether or not each partial area includes blur based on the distribution of luminance values of the pixels included in the partial area.
  • In that case, the numerical value recognition unit 121 generates a histogram of the luminance values of the pixels in the partial area, detects a local maximum in each of the luminance range indicating the numerical value (white) and the luminance range indicating the background (black), and calculates the average of the full widths at half maximum of the two maxima.
  • the numerical value recognition unit 121 determines whether the partial area includes blur depending on whether the calculated average full width at half maximum is greater than or equal to a fifth threshold.
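The histogram-based test can be sketched as follows; the split of the 8-bit range into a dark half and a bright half, and the threshold value, are illustrative assumptions:

```python
# Sketch of the fifth-threshold blur test: in a crisp black/white digit
# image the histogram has two narrow peaks; blur widens them, so a large
# average full width at half maximum (FWHM) indicates blur.

def fwhm(hist, lo, hi):
    """Full width at half maximum of the histogram peak in bins [lo, hi)."""
    peak_bin = max(range(lo, hi), key=lambda b: hist[b])
    half = hist[peak_bin] / 2
    left = peak_bin
    while left > lo and hist[left - 1] >= half:
        left -= 1
    right = peak_bin
    while right < hi - 1 and hist[right + 1] >= half:
        right += 1
    return right - left + 1

def is_blurred_by_histogram(pixels, threshold):
    """pixels: flat list of 8-bit luminance values. Peaks are searched in
    a dark (background) and a bright (numeral) range; a wide average FWHM
    means the black/white separation has smeared, i.e. blur."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    avg_width = (fwhm(hist, 0, 128) + fwhm(hist, 128, 256)) / 2
    return avg_width >= threshold

crisp = [10] * 50 + [245] * 50                          # two narrow peaks
smear = list(range(6, 16)) * 5 + list(range(240, 250)) * 5  # two wide peaks
print(is_blurred_by_histogram(crisp, 5), is_blurred_by_histogram(smear, 5))
# False True
```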
  • Further, the numerical value recognition unit 121 determines whether each partial area includes shine using, for example, a classifier pre-trained to output, when an image is input, a shine degree indicating the degree of shine included in the input image.
  • This classifier is pre-trained, for example by deep learning, using images that show a meter and do not include shine, and is stored in the storage device 110 in advance. Note that this classifier may instead be pre-trained using images that show a meter and include shine.
  • the numerical value recognition unit 121 inputs an image including the detected partial area to the classifier, and determines whether the partial area includes shine depending on whether the shine degree output from the classifier is greater than or equal to a sixth threshold.
  • the numerical value recognition unit 121 may determine whether or not each partial area includes shine based on the luminance value of each pixel included in the partial area.
  • In that case, the numerical value recognition unit 121 counts the pixels in the partial area whose luminance value is greater than or equal to a seventh threshold (white), and determines whether the partial area includes shine depending on whether the counted number is greater than or equal to an eighth threshold.
  • the numerical value recognition unit 121 may determine whether or not each partial area includes a shine based on the distribution of luminance values of each pixel included in the partial area.
  • the numerical value recognition unit 121 generates a histogram of the luminance value of each pixel in the partial area, and determines whether or not the partial area depends on whether the number of pixels distributed in the area equal to or higher than the seventh threshold is equal to or higher than the eighth threshold. Whether or not is included is determined.
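The luminance-based shine check above amounts to counting near-white pixels. The concrete threshold values in this sketch are illustrative assumptions, since the disclosure sets them by prior experiment.

```python
import numpy as np

SEVENTH_THRESHOLD = 240  # assumed luminance level treated as "white"
EIGHTH_THRESHOLD = 50    # assumed pixel-count limit

def has_shine(gray_pixels):
    """Shine check: a glare spot shows up as an unusually large number of near-white pixels."""
    bright_count = int(np.count_nonzero(np.asarray(gray_pixels) >= SEVENTH_THRESHOLD))
    return bright_count >= EIGHTH_THRESHOLD
```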
  • Each of the thresholds and ranges described above is set in advance based on preliminary experiments.
  • The numerical value recognition unit 121 stores information on each partial area determined to be usable in the management table of the storage device 110 (step S206), and ends the series of steps.
  • FIG. 6 is a diagram showing an example of the data structure of the management table.
  • the input image ID is uniquely assigned for each input image.
  • the evidence image information is information indicating the storage destination of the evidence image.
  • The evidence image is the image saved as proof of the numerical value read from the meter.
  • As the evidence image, for example, an image obtained by cutting out the area within the numerical background frame from the input image is used.
  • the evidence image is an example of at least a part of the input image.
  • the partial area information is information indicating the storage destination of the partial area image, and is stored for each partial area corresponding to each digit.
  • the partial area image is an image obtained by cutting out a partial area from the input image.
  • the digit value is a digit value specified in each partial area, and is stored for each digit.
  • The numerical value recognition unit 121 cuts out each partial area determined to be usable from the input image, generates a partial area image, stores it in the storage device 110, and records the storage destination in the management table as partial area information. Note that, instead of cutting out each usable partial area from the input image, the numerical value recognition unit 121 may store the position information of each partial area within the input image in the management table as partial area information.
  • When the input image contains a partial area determined to be usable, the numerical value recognition unit 121 generates an evidence image by cutting out the area within the numerical background frame from the input image, stores it in the storage device 110, and records the storage destination in the management table as evidence image information. Note that an image obtained by cutting out the area within the plate frame from the input image, or the input image itself, may be used as the evidence image instead. In that case, the numerical value recognition unit 121 stores that image in the storage device 110 as the evidence image and records its storage destination in the management table as evidence image information.
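The management table of FIG. 6 can be pictured as one record per input image. The field names below are hypothetical, chosen only to mirror the columns described above (input image ID, evidence image information, partial area information per digit, and digit value per digit).

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ManagementRecord:
    input_image_id: int                        # uniquely assigned to each input image
    evidence_image_path: Optional[str] = None  # storage destination of the evidence image
    # digit number -> storage destination of the cut-out partial area image
    partial_area_paths: Dict[int, str] = field(default_factory=dict)
    # digit number -> digit value identified in that partial area
    digit_values: Dict[int, int] = field(default_factory=dict)

# Hypothetical usage: register the result for digit 1 of input image 1.
record = ManagementRecord(input_image_id=1)
record.evidence_image_path = "evidence/0001.png"       # hypothetical path
record.partial_area_paths[1] = "parts/0001_digit1.png"
record.digit_values[1] = 7
```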
  • FIG. 7 is a flowchart showing an example of the operation of numerical value recognition processing.
  • the operation flow shown in FIG. 7 is executed in step S104 of the flowchart shown in FIG.
  • Each process of steps S301 to S309 in FIG. 7 is executed for each partial area determined to be usable in the numerical value recognition process. That is, each process of steps S301 to S309 in FIG. 7 is executed for each digit of the numerical value in the meter that is shown in the input image sequentially generated by the imaging device 104.
  • the numerical value recognition unit 121 determines whether or not the digit value of the digit number assigned to the partial area to be processed has been confirmed (step S301).
  • When the digit value has been confirmed, the numerical value recognition unit 121 moves the process to step S310. On the other hand, when the digit value of that digit number has not been confirmed, the numerical value recognition unit 121 identifies the digit value shown in the partial area (step S302).
  • The numerical value recognition unit 121 identifies the digit value shown in the partial area by using a discriminator pre-trained so that, when an image containing a single-digit numeral is input, it outputs the numeral shown in the image. This discriminator is pre-trained, for example by deep learning, using a plurality of images of each numeral on the meter, and is stored in the storage device 110 in advance.
  • The numerical value recognition unit 121 inputs an image containing the partial area to the discriminator and identifies the numeral output from the discriminator as the digit value shown in the partial area. Note that the numerical value recognition unit 121 may instead identify the digit value shown in the partial area using a known OCR technique.
  • the numerical value recognition unit 121 converts the specified numerical value into an integer (step S303).
  • In general, a meter displays a measured value on the plate by rotating a cylindrical drum with a plurality of numerals printed on its side surface. If the meter is photographed while the printed numerals are moving as the drum rotates, the partial area may therefore contain parts of a plurality of numerals. The discriminator is therefore pre-trained so that, when the partial area contains a plurality of numerals, it outputs a decimal number weighted according to the area occupied by each numeral.
  • The numerical value recognition unit 121 converts the numerical value output from the discriminator into an integer by rounding its fractional part up, rounding it down, or rounding it off (step S303).
  • the numerical value recognition unit 121 stores the specified numerical value in the management table of the storage device 110 as the digit value of the digit number assigned to the partial area (step S304).
  • the numerical value recognition unit 121 determines whether or not numerical value recognition processing has been performed on a predetermined number (for example, 10) or more input images (step S305).
  • When the numerical value recognition process has not yet been performed on the predetermined number of input images, the numerical value recognition unit 121 moves the process to step S310.
  • When the numerical value recognition process has been performed on the predetermined number or more of input images, the numerical value recognition unit 121 identifies the mode value among the digit values identified for the most recent predetermined number of input images (step S306).
  • To do so, the numerical value recognition unit 121 refers to the management table and identifies, as the mode value, the digit value stored most frequently among the most recent predetermined number of digit values for the digit number assigned to the partial area being processed.
  • The numerical value recognition unit 121 then calculates the ratio of the number of occurrences of the identified mode value (within the most recent predetermined number) to the predetermined number (step S307).
  • the numerical value recognition unit 121 determines whether or not the calculated ratio exceeds the ninth threshold value (step S308).
  • For example, the ninth threshold is set to 50%.
  • When the calculated ratio does not exceed the ninth threshold, the numerical value recognition unit 121 regards the mode value as not yet reliable and moves the process to step S310. On the other hand, when the calculated ratio exceeds the ninth threshold, the numerical value recognition unit 121 adopts the identified mode value as the confirmed digit value (step S309). By confirming a digit value only when the calculated ratio exceeds the ninth threshold, the numerical value recognition unit 121 further increases the reliability of the recognized numerical value.
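Steps S306 to S309 can be sketched as follows, using the example values from the text (a predetermined number of 10 and a ninth threshold of 50%); the function name is hypothetical.

```python
from collections import Counter
from typing import List, Optional

PREDETERMINED_NUMBER = 10  # number of recent input images aggregated per digit
NINTH_THRESHOLD = 0.5      # 50%, as in the example in the text

def confirm_digit(recent_digit_values: List[int]) -> Optional[int]:
    """Return the confirmed digit value, or None while the mode value is not yet reliable."""
    recent = recent_digit_values[-PREDETERMINED_NUMBER:]         # most recent predetermined number
    mode_value, occurrences = Counter(recent).most_common(1)[0]  # step S306
    ratio = occurrences / PREDETERMINED_NUMBER                   # step S307
    return mode_value if ratio > NINTH_THRESHOLD else None       # steps S308-S309
```

Because the ratio is taken against the predetermined number rather than the number of images seen so far, six identical readings already give a ratio of 60%, which matches the early-confirmation example discussed later in the text.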
  • the numerical value recognition unit 121 determines whether or not the processing has been completed for all the detected partial areas (step S310).
  • When the processing has been completed for all the detected partial areas, the numerical value recognition unit 121 determines whether the digit values of all the digit numbers have been confirmed (step S311).
  • When not all the digit values have been confirmed, the numerical value recognition unit 121 ends the series of steps without executing any further processing.
  • When the digit values of all the digit numbers have been confirmed, the numerical value recognition unit 121 recognizes the numerical value obtained by combining the confirmed digit values of all the digits as the numerical value in the meter (step S312), and ends the series of steps.
  • the numerical value recognition unit 121 identifies and counts the numerical values in the meter shown in the sequentially generated input images for each digit, and recognizes the numerical values in the meter based on the counting results.
  • Furthermore, even for an input image in which the numeral of a particular digit cannot be identified, the numerical value recognition unit 121 identifies the numerals of the other digits and uses them in the aggregation, so the numerical value in the meter can be recognized accurately from fewer input images. Since the user does not need to keep photographing the meter until an input image in which all the digits can be identified is generated, the image processing apparatus 100 improves user convenience.
  • Note that the numerical value recognition unit 121 may instead identify and aggregate the entire numerical value in the meter shown in each of the sequentially generated input images, and recognize the numerical value in the meter based on the aggregation result.
  • In step S305, even if the number of input images on which the numerical value recognition process has been executed is less than the predetermined number, the numerical value recognition unit 121 may execute the processes from step S306 onward whenever a digit value can be confirmed.
  • For example, assume that the predetermined number is 10 and the ninth threshold is 50%.
  • Suppose that when the number of input images subjected to the numerical value recognition process reaches six, the digit values identified for the input images are all the same.
  • That digit value is then the mode value, and the occurrence ratio of the mode value is at least 60%.
  • In such a case, the numerical value recognition unit 121 may confirm the numerical value to be recognized even though the number of input images on which the numerical value recognition process has been executed is less than the predetermined number. The numerical value recognition unit 121 can thereby shorten the recognition time of the numerical value recognition process.
  • FIG. 8 is a flowchart showing an example of operation of evidence image storage processing. The operation flow shown in FIG. 8 is executed in step S106 of the flowchart shown in FIG.
  • First, the determination unit 122 determines, for every partial area detected by the numerical value recognition unit 121, whether that partial area is clear (step S401). Note that the determination unit 122 may make this determination only for partial areas whose identified digit value matches the corresponding digit of the numerical value recognized by the numerical value recognition unit 121.
  • Here, a partial area being clear means that the numerical value contained in the partial area can be recognized, i.e., that the partial area contains neither blur nor shine.
  • Conversely, a partial area being unclear means that the numerical value contained in the partial area cannot be recognized, i.e., that the partial area contains blur or shine.
  • the determination unit 122 determines whether each partial area is clear depending on whether each partial area includes blur or shine.
  • For example, the determination unit 122 determines whether each partial area contains blur or shine by using a discriminator pre-trained with at least images of a meter that contain neither blur nor shine. Alternatively, the determination unit 122 determines whether each partial area contains blur or shine based on the luminance values of the image in each partial area, in the same manner as in step S205. However, the determination unit 122 uses criteria under which a partial area is more readily judged to contain blur or shine than in step S205.
  • For example, the determination unit 122 sets the third, fifth, sixth, or eighth threshold smaller than the value used in step S205, and sets the fourth threshold larger than the value used in step S205.
  • Note that the determination unit 122 may determine whether each partial area contains blur or shine using the same criteria as in step S205. In that case, the determination unit 122 may reuse the determination results from step S205, which shortens the processing time and reduces the processing load of the determination process.
  • Next, the control unit 123 determines whether there is a single input image in which the partial areas corresponding to all the digits are determined to be clear (step S402).
  • When such an input image exists, the control unit 123 selects that input image.
  • The control unit 123 stores only the evidence image corresponding to the selected input image as the evidence image, associates it with the numerical value recognized by the numerical value recognition unit 121, stores them in the storage device 110 (step S403), and ends the series of steps.
  • On the other hand, when no such input image exists, the control unit 123 identifies the digits whose partial areas have not yet been determined to be clear in any input image selected so far (step S404). When step S404 is executed for the first time, all the digits are identified as digits whose partial areas are not determined to be clear.
  • Next, the control unit 123 selects, as an input image whose evidence image will be used as an evidence image, the input image containing the largest number of partial areas determined to be clear among the identified digits (step S405). The control unit 123 thereby selects, for each digit, an input image in which the partial area corresponding to that digit is determined to be clear.
  • Next, the control unit 123 determines whether any digit remains whose partial area is not determined to be clear in any of the selected input images (step S406).
  • If such a digit remains, the control unit 123 returns the process to step S404 and re-identifies the digits whose partial areas are not determined to be clear. On the other hand, if no such digit remains, the control unit 123 stores the evidence images corresponding to the plurality of input images selected in step S405 in the storage device 110 as evidence images (step S407), and ends the series of steps.
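Steps S402 to S407 behave like a greedy covering of the digits. The sketch below assumes the clarity decisions are already available as a set of clear digits per input image; the data layout and names are hypothetical.

```python
from typing import Dict, List, Set

def select_evidence_images(clear_digits: Dict[int, Set[int]],
                           all_digits: Set[int]) -> List[int]:
    """Return the IDs of the input images whose evidence images are stored."""
    # Steps S402/S403: one image in which every digit is clear suffices on its own.
    for image_id, digits in clear_digits.items():
        if digits >= all_digits:
            return [image_id]
    selected: List[int] = []
    remaining = set(all_digits)
    while remaining:
        # Steps S404/S405: pick the image that covers the most still-unclear digits.
        best = max(clear_digits, key=lambda i: len(clear_digits[i] & remaining))
        covered = clear_digits[best] & remaining
        if not covered:
            break  # no image covers the remaining digits
        selected.append(best)
        remaining -= covered  # step S406: loop until no unclear digit remains
    return selected
```

With the combinations of FIG. 9, a single fully clear image yields one evidence image, while the worst case degenerates to one evidence image per digit.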
  • FIG. 9 is a diagram for explaining a combination of evidence images stored in the storage device 110.
  • In combination 1, the evidence image of input image 1 is used as the evidence image.
  • Combination 2 shows a combination in the case where there is no input image in which the partial areas corresponding to all the digits (digits 1 to 4) are determined to be clear, and only the partial areas corresponding to three digits (digits 1 to 3) in input image 1 are determined to be clear.
  • In this case, the evidence image of input image 1 and the evidence image of input image 2, in which the partial area corresponding to digit 4 (not determined to be clear in input image 1) is determined to be clear, are used as the evidence images.
  • the combination 4 indicates a combination in the case where there is no input image in which the partial areas corresponding to the digits 3 and 4 that are not determined to be clear in the input image 1 are determined to be clear.
  • the evidence image of the input image 3 is used as the evidence image.
  • Combination 5 shows a combination in the case where there is no input image in which the partial areas corresponding to two or more digits are determined to be clear.
  • In that case, the evidence images of input images 1 to 4, which are determined to have clear partial areas corresponding to digits 1 to 4 respectively, are used as the evidence images.
  • In this way, whenever possible, the control unit 123 uses as the evidence image the evidence image corresponding to an input image in which each of the partial areas corresponding to all the digits is determined to be clear.
  • Otherwise, the control unit 123 additionally uses as evidence images the evidence images of other input images that complement the partial areas not determined to be clear in the input images whose evidence images are already used.
  • By storing a plurality of images with evidentiary value even when there is no input image in which all the partial areas are determined to be clear, the control unit 123 can efficiently store appropriate images as evidence images.
  • Further, the user does not need to keep photographing the meter until an input image in which all the partial areas are determined to be clear is generated, so the image processing apparatus 100 improves user convenience.
  • In addition, the image processing apparatus 100 does not need to store images for all the input images used in the numerical value recognition process, which reduces the required capacity of the storage device 110.
  • Note that the control unit 123 may cause the imaging device 104 to keep imaging the meter and continuously generating input images until evidence images in which the partial areas corresponding to all the digits are clear have been collected.
  • In that case, even after the numerical value recognition unit 121 has recognized the numerical value in the meter, the control unit 123 continues to determine, for each newly generated input image, whether each partial area is clear.
  • When the evidence images have been collected, the control unit 123 causes the imaging device 104 to stop imaging the meter and ends the evidence image storage process. As a result, the image processing apparatus 100 can more reliably acquire evidence images whose numerals can be visually identified.
  • Further, when evidence images in which the partial areas corresponding to all the digits are clear have not been collected within a predetermined time after the numerical value recognition unit 121 recognized the numerical value in the meter, the control unit 123 may accept from the user a photographing stop instruction that instructs the apparatus to stop photographing the meter.
  • In that case, the control unit 123 displays on the display device 103 display data such as a button for accepting the photographing stop instruction from the user.
  • When the button is operated by the user, the control unit 123 accepts the photographing stop instruction.
  • When the photographing stop instruction is accepted, the control unit 123 selects, for each digit, the partial area determined to be the clearest among the partial areas corresponding to that digit, and uses the evidence image of each input image containing each selected partial area as an evidence image.
  • In that case, the control unit 123 selects as the clearest partial area, for example, the partial area with the smallest blur degree output from the discriminator, the partial area with the largest average edge strength, or the partial area with the smallest average full width at half maximum.
  • Alternatively, the control unit 123 selects as the clearest partial area the partial area with the smallest shine degree output from the discriminator, the partial area with the smallest number of pixels whose luminance value is equal to or greater than the seventh threshold, or the partial area with the smallest number of pixels distributed in the range at or above the seventh threshold in the histogram.
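After a photographing stop instruction, the per-digit fallback described above is a simple argmin over whichever clarity score is used (blur degree in this sketch; edge strength or shine degree would work the same way with the comparison adjusted). Names and data layout are hypothetical.

```python
from typing import Dict, List, Tuple

def pick_clearest_per_digit(
        candidates: Dict[int, List[Tuple[str, float]]]) -> Dict[int, str]:
    """candidates maps each digit to (input image ID, blur degree) pairs for its partial areas.
    The image whose partial area has the smallest blur degree is chosen for each digit."""
    return {digit: min(regions, key=lambda r: r[1])[0]
            for digit, regions in candidates.items()}
```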
  • As described in detail above, the image processing apparatus 100 uses, for each digit of the numerical value in the meter, the evidence image corresponding to an input image in which the partial area corresponding to that digit is determined to be clear as an evidence image.
  • the image processing apparatus 100 can efficiently store an appropriate image as an evidence image.
  • FIG. 10 is a block diagram showing a schematic configuration of the processing circuit 230 in the image processing apparatus according to another embodiment.
  • the processing circuit 230 is used instead of the processing circuit 130 of the image processing apparatus 100, and executes the entire processing instead of the CPU 120.
  • the processing circuit 230 includes a numerical value recognition circuit 231, a determination circuit 232, a control circuit 233, and the like.
  • the numerical value recognition circuit 231 is an example of a numerical value recognition unit, and has the same function as the numerical value recognition unit 121.
  • the numerical value recognition circuit 231 sequentially acquires input images obtained by photographing the meter from the imaging device 104 and sequentially stores them in the storage device 110.
  • the numerical value recognition circuit 231 identifies and counts the numerical values in the meter shown in each input image, recognizes the numerical values in the meter based on the totaled results, and stores the recognition results in the storage device 110.
  • the determination circuit 232 is an example of a determination unit and has the same function as the determination unit 122.
  • the determination circuit 232 determines whether or not each partial area is clear for each partial area corresponding to each digit of the numerical value in the meter, and outputs the determination result to the control circuit 233.
  • The control circuit 233 is an example of a control unit and has the same function as the control unit 123. For each digit, the control circuit 233 selects an input image in which the partial area corresponding to that digit is determined to be clear, and stores the evidence image corresponding to the selected input image in the storage device 110 as an evidence image in association with the numerical value recognized by the numerical value recognition circuit 231.
  • the image processing apparatus 100 can efficiently store an appropriate image as an evidence image even when the processing circuit 230 is used.
  • Note that each discriminator used in the partial area detection process, the numerical value recognition process, or the evidence image storage process may be stored in an external device such as a server device instead of in the storage device 110.
  • In that case, the numerical value recognition unit 121 transmits each image to the server device via the communication device 101, and receives the identification results output from each discriminator from the server device.
  • the image processing apparatus 100 is not limited to a portable information processing apparatus, and may be, for example, a fixed point camera or the like installed so that a meter can be imaged.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

Provided are an image processing device, a control method, and a control program which make it possible to efficiently store an appropriate image as a proof image. An image processing device has: a storage unit; an image pickup unit for sequentially generating input images by taking images of a meter; a numerical value recognition unit for specifying and compiling numerical values in the meter in the sequentially generated input images, and on the basis of the compilation result, recognizing the numerical values in the meter; a determination unit for determining, for each partial area corresponding to each digit of the numerical values in the meter, whether each partial area is clear or not; and a control unit for selecting, for each digit, an input image in which the partial area corresponding to each digit is determined to be clear, and storing, in the storage unit, at least part of the selected input image as a proof image with the at least part of the selected input image associated with the numerical values recognized by the numerical value recognition unit.

Description

Image processing device, control method, and control program
The present disclosure relates to an image processing device, a control method, and a control program, and more particularly, to an image processing device, a control method, and a control program for recognizing a numerical value in a meter from an image obtained by photographing the meter.
In factories, houses, and the like, during facility inspection work, a worker visually reads a numerical value, such as an amount of electric power, from a meter (device) and records it in an inspection book, which is a paper ledger. In such manual work, however, an erroneous value may be recorded in the inspection book due to human error, causing rework. To solve this problem, technology that automatically recognizes numerical values by computer from an image of a meter photographed with a camera has recently come into use in facility inspection work.
An instrument reading device has been disclosed that captures a scene including a numerical value indicated by a digital instrument, processes the captured image, and detects the numerical value indicated by the instrument (see Patent Document 1).
Japanese Patent No. 5530752
When a numerical value is automatically recognized by a computer from an image of a meter photographed with a camera, it is desirable to save the photographed image as an evidence image so that a worker or the like can verify it later. In that case, it is desired that appropriate images can be efficiently stored as evidence images.
An object of the image processing device, the control method, and the control program is to make it possible to efficiently store an appropriate image as an evidence image.
An image processing device according to one aspect of the present invention includes: a storage unit; an imaging unit that sequentially generates input images obtained by photographing a meter; a numerical value recognition unit that identifies and aggregates the numerical values in the meter shown in the sequentially generated input images and recognizes the numerical value in the meter based on the aggregation result; a determination unit that determines, for each partial area corresponding to each digit of the numerical value in the meter, whether the partial area is clear; and a control unit that selects, for each digit, an input image in which the partial area corresponding to that digit is determined to be clear, and stores at least part of the selected input image in the storage unit as an evidence image in association with the numerical value recognized by the numerical value recognition unit.
A control method according to one aspect of the present invention is a control method for an image processing device including a storage unit and an imaging unit that sequentially generates input images obtained by photographing a meter. The method includes: identifying and aggregating the numerical values in the meter shown in the sequentially generated input images; recognizing the numerical value in the meter based on the aggregation result; determining, for each partial area corresponding to each digit of the numerical value in the meter, whether the partial area is clear; selecting, for each digit, an input image in which the partial area corresponding to that digit is determined to be clear; and storing at least part of the selected input image in the storage unit as an evidence image in association with the recognized numerical value.
A control program according to one aspect of the present invention is a control program for an image processing device including a storage unit and an imaging unit that sequentially generates input images obtained by photographing a meter. The program causes the image processing device to execute: identifying and aggregating the numerical values in the meter shown in the sequentially generated input images; recognizing the numerical value in the meter based on the aggregation result; determining, for each partial area corresponding to each digit of the numerical value in the meter, whether the partial area is clear; selecting, for each digit, an input image in which the partial area corresponding to that digit is determined to be clear; and storing at least part of the selected input image in the storage unit as an evidence image in association with the recognized numerical value.
According to the present embodiment, the image processing device, the control method, and the control program can efficiently store an appropriate image as an evidence image.
The objects and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims. Both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention as claimed.
A diagram showing an example of the schematic configuration of an image processing apparatus 100 according to an embodiment.
A diagram showing the schematic configuration of the storage device 110 and the CPU 120.
A flowchart showing an example of the operation of the overall processing.
A flowchart showing an example of the operation of the partial area detection process.
A diagram showing an example of an input image 500 obtained by photographing a meter.
A diagram for explaining partial areas.
A diagram showing an example of the data structure of the management table.
A flowchart showing an example of the operation of the numerical value recognition process.
A flowchart showing an example of the operation of the evidence image storage process.
A diagram for explaining combinations of stored evidence images.
A diagram showing the schematic configuration of another processing circuit 230.
Hereinafter, an image processing apparatus according to one aspect of the present disclosure will be described with reference to the drawings. However, it should be noted that the technical scope of the present disclosure is not limited to these embodiments and extends to the inventions described in the claims and their equivalents.
 FIG. 1 is a diagram illustrating an example of a schematic configuration of the image processing apparatus 100 according to the embodiment.
 The image processing apparatus 100 is a portable information processing apparatus such as a tablet PC, a multi-function mobile phone (a so-called smartphone), a personal digital assistant, or a notebook PC, and is used by a worker who is its user. The image processing apparatus 100 includes a communication device 101, an input device 102, a display device 103, an imaging device 104, a storage device 110, a CPU (Central Processing Unit) 120, and a processing circuit 130. Each unit of the image processing apparatus 100 will be described in detail below.
 The communication device 101 includes a communication interface circuit including an antenna whose sensitive band is mainly the 2.4 GHz band, the 5 GHz band, or the like. The communication device 101 performs wireless communication with an access point or the like based on the wireless communication scheme of the IEEE (The Institute of Electrical and Electronics Engineers, Inc.) 802.11 standard. The communication device 101 transmits and receives data to and from an external server apparatus (not illustrated) via the access point. The communication device 101 supplies data received from the server apparatus via the access point to the CPU 120, and transmits data supplied from the CPU 120 to the server apparatus via the access point. The communication device 101 may be any device capable of communicating with an external apparatus. For example, the communication device 101 may communicate with the server apparatus via a base station apparatus (not illustrated) according to a mobile phone communication scheme, or may communicate with the server apparatus according to a wired LAN communication scheme.
 The input device 102 includes an input device such as a touch-panel input device, a keyboard, or a mouse, and an interface circuit that acquires signals from the input device. The input device 102 accepts input from the user and outputs a signal corresponding to that input to the CPU 120.
 The display device 103 includes a display composed of liquid crystal, organic EL (Electro-Luminescence), or the like, and an interface circuit that outputs image data or various kinds of information to the display. The display device 103 is connected to the CPU 120 and displays image data output from the CPU 120 on the display. The input device 102 and the display device 103 may be integrated using a touch panel display.
 The imaging device 104 includes a reduction optical system type imaging sensor having imaging elements composed of CCDs (Charge Coupled Devices) arranged one-dimensionally or two-dimensionally, and an A/D converter. The imaging device 104 is an example of an imaging unit and sequentially photographs a meter (for example, at 30 frames/second) in accordance with instructions from the CPU 120. The imaging sensor generates an analog image signal of the photographed meter and outputs it to the A/D converter. The A/D converter performs analog-to-digital conversion on the output analog image signal to sequentially generate digital image data, and outputs the digital image data to the CPU 120. Instead of the CCDs, a unit magnification optical system type CIS (Contact Image Sensor) having imaging elements composed of CMOSs (Complementary Metal Oxide Semiconductors) may be used. In the following, the digital image data generated and output by the imaging device 104 photographing a meter may be referred to as an input image.
 The storage device 110 is an example of a storage unit. The storage device 110 includes a memory device such as a RAM (Random Access Memory) or a ROM (Read Only Memory), a fixed disk device such as a hard disk, or a portable storage device such as a flexible disk or an optical disc. The storage device 110 stores computer programs, databases, tables, and the like used in the various kinds of processing of the image processing apparatus 100. The computer programs may be installed from a computer-readable portable recording medium such as a CD-ROM (compact disc read only memory) or a DVD-ROM (digital versatile disc read only memory). The computer programs are installed in the storage device 110 using a known setup program or the like. The storage device 110 also stores a management table for managing information related to each input image.
 The CPU 120 operates based on programs stored in advance in the storage device 110. The CPU 120 may be a general-purpose processor. A DSP (digital signal processor), an LSI (large scale integration), or the like may be used instead of the CPU 120. An ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or the like may also be used instead of the CPU 120.
 The CPU 120 is connected to the communication device 101, the input device 102, the display device 103, the imaging device 104, the storage device 110, and the processing circuit 130, and controls these units. The CPU 120 performs data transmission/reception control via the communication device 101, input control of the input device 102, display control of the display device 103, imaging control of the imaging device 104, control of the storage device 110, and the like. Furthermore, the CPU 120 recognizes the value in the meter shown in the input image generated by the imaging device 104, and stores evidence images in the storage device 110.
 The processing circuit 130 applies predetermined image processing, such as correction processing, to the input image acquired from the imaging device 104. An LSI, a DSP, an ASIC, an FPGA, or the like may be used as the processing circuit 130.
 FIG. 2 is a diagram illustrating a schematic configuration of the storage device 110 and the CPU 120.
 As illustrated in FIG. 2, the storage device 110 stores programs such as a numerical value recognition program 111, a determination program 112, and a control program 113. Each of these programs is a functional module implemented by software operating on a processor. The CPU 120 reads each program stored in the storage device 110 and operates according to each read program, thereby functioning as a numerical value recognition unit 121, a determination unit 122, and a control unit 123.
 FIG. 3 is a flowchart illustrating an example of the operation of the overall processing by the image processing apparatus 100.
 An example of the operation of the overall processing by the image processing apparatus 100 will be described below with reference to the flowchart shown in FIG. 3. The flow of operation described below is executed mainly by the CPU 120 in cooperation with each element of the image processing apparatus 100, based on the programs stored in advance in the storage device 110.
 First, when the user uses the input device 102 to enter a shooting start instruction for starting the photographing of a meter and the numerical value recognition unit 121 receives a shooting start instruction signal from the input device 102, the numerical value recognition unit 121 accepts the shooting start instruction (step S101). Upon accepting the shooting start instruction, the numerical value recognition unit 121 initializes the information used in the image processing, sets parameters of the imaging device 104 such as the imaging size and focus, and causes the imaging device 104 to photograph the meter and generate input images. The numerical value recognition unit 121 sequentially stores the input images sequentially generated by the imaging device 104 in the storage device 110.
 Next, the numerical value recognition unit 121 executes partial area detection processing (step S102). In the partial area detection processing, the numerical value recognition unit 121 detects partial areas each corresponding to a digit of the value in the meter shown in the input image generated by the imaging device 104. The partial area detection processing will be described in detail later.
 Next, the numerical value recognition unit 121 determines whether partial areas usable in the numerical value recognition processing have been detected in the partial area detection processing (step S103).
 If no partial area usable in the numerical value recognition processing has been detected, the numerical value recognition unit 121 returns to step S102 and executes the partial area detection processing on a newly generated input image. On the other hand, if usable partial areas have been detected, the numerical value recognition unit 121 executes the numerical value recognition processing (step S104). In the numerical value recognition processing, the numerical value recognition unit 121 identifies and tallies the values in the meter shown in the sequentially generated input images, and recognizes the value in the meter on the basis of the tally. The numerical value recognition processing will be described in detail later.
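The tallying over successive input images can be illustrated with a minimal sketch, assuming the value read from each frame is tallied per digit and a digit is accepted only when one reading holds a clear majority (the function name, data shapes, and majority rule are illustrative assumptions, not part of the embodiment):

```python
from collections import Counter

def tally_digit_readings(frames):
    """Tally per-digit readings over successive frames and return the
    majority value for each digit, or None when no value is dominant.

    frames: list of readings, one per input image; each reading maps a
    digit number (1 = rightmost) to the digit recognized in that frame.
    """
    counts = {}  # digit number -> Counter of recognized values
    for reading in frames:
        for digit_no, value in reading.items():
            counts.setdefault(digit_no, Counter())[value] += 1

    result = {}
    for digit_no, counter in counts.items():
        value, votes = counter.most_common(1)[0]
        # Accept the digit only with a strict majority of the frames.
        result[digit_no] = value if votes * 2 > sum(counter.values()) else None
    return result
```

For example, readings of 7/3, 7/5, and 7/3 for two digits over three frames yield 7 for digit 1 and 3 for digit 2.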
 Next, the numerical value recognition unit 121 determines whether the value in the meter has been recognized in the numerical value recognition processing (step S105).
 If the value in the meter could not be recognized, the numerical value recognition unit 121 returns to step S102 and repeats the processing of steps S102 to S105 on a newly generated input image. On the other hand, if the value in the meter has been recognized, the determination unit 122 and the control unit 123 execute evidence image storing processing (step S106). In the evidence image storing processing, the determination unit 122 determines whether each partial area is clear. For each digit, the control unit 123 selects an input image in which the partial area corresponding to that digit has been determined to be clear, and stores the image corresponding to the selected input image in the storage device 110 as an evidence image, in association with the value recognized by the numerical value recognition unit 121. The evidence image storing processing will be described in detail later.
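The per-digit selection of evidence images can be sketched as follows, assuming the determination unit's clearness judgments are available as a per-frame map from digit number to a boolean flag, and that the earliest clear frame is chosen (the names, data shapes, and the earliest-frame policy are illustrative assumptions):

```python
def select_evidence_frames(sharpness_per_frame):
    """For each digit number, pick the index of the earliest frame in
    which that digit's partial area was judged clear; digits that are
    clear in no frame are omitted from the result.

    sharpness_per_frame: list over frames, each entry mapping a digit
    number to True (clear) or False (blurred or shiny).
    """
    chosen = {}
    for frame_index, flags in enumerate(sharpness_per_frame):
        for digit_no, is_clear in flags.items():
            if is_clear and digit_no not in chosen:
                chosen[digit_no] = frame_index
    return chosen
```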
 Next, the control unit 123 displays the value recognized by the numerical value recognition unit 121 and/or the evidence images selected by the control unit 123 on the display device 103 (step S107), and the series of steps ends. The control unit 123 may also transmit the value recognized by the numerical value recognition unit 121 and/or the evidence images selected by the control unit 123 to the server apparatus via the communication device 101.
 FIG. 4 is a flowchart illustrating an example of the operation of the partial area detection processing. The flow of operation shown in FIG. 4 is executed in step S102 of the flowchart shown in FIG. 3.
 First, the numerical value recognition unit 121 detects a plate frame from the input image (step S201).
 FIG. 5A is a diagram illustrating an example of an input image 500 in which a meter (apparatus) is photographed.
 As shown in FIG. 5A, a meter generally has a black housing 501 and a white plate 502 inside the housing 501. The plate 502 is visible through glass (not illustrated), and a meter portion 503 showing a value such as the amount of electric power measured by the meter is arranged on the plate 502. In the meter portion 503, the value is shown in white and the background is shown in black. The numerical value recognition unit 121 detects the outer edge of the plate 502 as the plate frame. In the following description, a meter whose measured value has four digits is used as an example, but the number of digits of the value measured by the meter may be any number of two or more.
 When the absolute value of the difference between the luminance values or color values (R, G, and B values) of a pixel in the input image and a pixel horizontally or vertically adjacent to it, or a pixel separated from it by a predetermined distance, exceeds a first threshold, the numerical value recognition unit 121 extracts that pixel as an edge pixel. Using the Hough transform, the least squares method, or the like, the numerical value recognition unit 121 extracts straight lines passing near the extracted edge pixels, and detects, among the rectangles formed by four of the extracted straight lines two pairs of which are substantially orthogonal, the largest rectangle as the plate frame. Alternatively, the numerical value recognition unit 121 determines whether each extracted edge pixel is connected to other edge pixels, and labels connected edge pixels as one group. The numerical value recognition unit 121 may then detect, as the plate frame, the outer edge of the region enclosed by the group with the largest area among the extracted groups.
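The edge pixel extraction described above can be sketched as follows, using a single luminance channel and horizontal/vertical neighbours at a configurable distance (a simplification; the embodiment may equally use color values, and line fitting with the Hough transform or least squares would follow):

```python
def extract_edge_pixels(image, first_threshold, distance=1):
    """Extract (row, col) positions of pixels whose luminance differs
    from a horizontal or vertical neighbour at the given distance by
    more than the first threshold.

    image: 2-D list of luminance values (e.g. 0-255).
    """
    height, width = len(image), len(image[0])
    edge_pixels = []
    for y in range(height):
        for x in range(width):
            for dy, dx in ((0, distance), (0, -distance),
                           (distance, 0), (-distance, 0)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < height and 0 <= nx < width
                        and abs(image[y][x] - image[ny][nx]) > first_threshold):
                    edge_pixels.append((y, x))
                    break  # one strong neighbour difference is enough
    return edge_pixels
```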
 The numerical value recognition unit 121 may instead detect the plate frame using the difference between the color of the housing 501 and the color of the plate 502. When the luminance value or color value of a pixel is less than a second threshold (indicating black) and the luminance value or color value of the pixel adjacent to it on the right, or of a pixel a predetermined distance to its right, is equal to or greater than the second threshold (indicating white), the numerical value recognition unit 121 extracts that pixel as a left-end edge pixel. The second threshold is set to a value between the value indicating black and the value indicating white. Similarly, when the luminance value or color value of a pixel is less than the second threshold and the luminance value or color value of the pixel adjacent to it on the left, or of a pixel a predetermined distance to its left, is equal to or greater than the second threshold, the numerical value recognition unit 121 extracts that pixel as a right-end edge pixel. Similarly, when the luminance value or color value of a pixel is less than the second threshold and the luminance value or color value of the pixel adjacent to it below, or of a pixel a predetermined distance below it, is equal to or greater than the second threshold, the numerical value recognition unit 121 extracts that pixel as a top-end edge pixel. Similarly, when the luminance value or color value of a pixel is less than the second threshold and the luminance value or color value of the pixel adjacent to it above, or of a pixel a predetermined distance above it, is equal to or greater than the second threshold, the numerical value recognition unit 121 extracts that pixel as a bottom-end edge pixel.
 Using the Hough transform, the least squares method, or the like, the numerical value recognition unit 121 extracts straight lines each connecting the extracted left-end, right-end, top-end, and bottom-end edge pixels, and detects the rectangle formed by the extracted straight lines as the plate frame.
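The black-to-white transition test can be sketched as follows for the left-end edge pixels; the right-, top-, and bottom-end cases follow by symmetry (the function name is illustrative, and a single luminance channel is assumed):

```python
def extract_left_end_edge_pixels(image, second_threshold, distance=1):
    """Extract (row, col) positions of pixels darker than the second
    threshold (black, like the housing) whose neighbour the given
    distance to the right is at or above it (white, like the plate):
    candidates for the left side of the plate frame.

    image: 2-D list of luminance values (e.g. 0-255).
    """
    height, width = len(image), len(image[0])
    left_edges = []
    for y in range(height):
        for x in range(width - distance):
            if (image[y][x] < second_threshold
                    and image[y][x + distance] >= second_threshold):
                left_edges.append((y, x))
    return left_edges
```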
 Next, the numerical value recognition unit 121 detects a numerical background frame from the detected region inside the plate frame (step S202). The numerical value recognition unit 121 detects the outer edge of the meter portion 503 as the numerical background frame.
 The numerical value recognition unit 121 detects the numerical background frame using a classifier pre-trained to output position information of the outer edge of the meter portion 503 when an image showing the plate 502 including the meter portion 503 is input. This classifier is pre-trained, for example by deep learning, using a plurality of images of photographed meters, and is stored in the storage device 110 in advance. The numerical value recognition unit 121 inputs the image including the detected plate frame to the classifier and acquires the position information output by the classifier, thereby detecting the numerical background frame.
 As in the case of detecting the plate frame, the numerical value recognition unit 121 may instead detect the numerical background frame based on edge pixels in the input image. The numerical value recognition unit 121 extracts edge pixels from the region of the input image including the plate frame, extracts straight lines passing near the extracted edge pixels, and detects, among the rectangles formed by four of the extracted straight lines two pairs of which are substantially orthogonal, the largest rectangle as the numerical background frame. Alternatively, the numerical value recognition unit 121 may detect, as the numerical background frame, the outer edge of the region enclosed by the group with the largest area among the groups of mutually connected extracted edge pixels.
 Alternatively, as in the case of detecting the plate frame, the numerical value recognition unit 121 may detect the numerical background frame using the difference between the color of the plate 502 and the color of the meter portion 503. When the luminance value or color value of a pixel is equal to or greater than the second threshold (indicating white) and the luminance value or color value of the pixel adjacent to it on the right, or of a pixel a predetermined distance to its right, is less than the second threshold (indicating black), the numerical value recognition unit 121 extracts that pixel as a left-end edge pixel. The numerical value recognition unit 121 extracts right-end, top-end, and bottom-end edge pixels in the same manner. Using the Hough transform, the least squares method, or the like, the numerical value recognition unit 121 extracts straight lines passing near the extracted left-end, right-end, top-end, and bottom-end edge pixels, and detects the rectangle formed by the extracted straight lines as the numerical background frame.
 Further, when marks 504 indicating the position of the numerical background frame are shown on the plate 502 of the meter, the numerical value recognition unit 121 may detect the marks 504 and detect the numerical background frame within the region sandwiched between the marks 504 in the horizontal and vertical directions.
 Next, the numerical value recognition unit 121 detects partial areas each corresponding to a digit of the value in the meter from the detected region inside the numerical background frame (step S203).
 FIG. 5B is a diagram for explaining the partial areas.
 An image 510 shown in FIG. 5B shows the numerical background frame of the meter portion 503 detected from the input image 500. The numerical value recognition unit 121 detects the rectangular regions 511 to 514, each including one digit of the value inside the numerical background frame, as partial areas.
 The numerical value recognition unit 121 detects the partial areas using a classifier pre-trained to output position information of the rectangular regions each including a digit of the value in the meter portion 503 when an image showing the meter portion 503 is input. This classifier is pre-trained, for example by deep learning, using a plurality of images of photographed meters, and is stored in the storage device 110 in advance. The numerical value recognition unit 121 inputs the image including the detected numerical background frame to the classifier and acquires the position information output by the classifier, thereby detecting the partial areas.
 As in the case of detecting the plate frame, the numerical value recognition unit 121 may instead detect the partial areas based on edge pixels in the input image. The numerical value recognition unit 121 extracts edge pixels from the region of the input image including the numerical background frame, extracts straight lines passing near the extracted edge pixels, and detects the largest rectangular regions among the rectangles formed by four of the extracted straight lines two pairs of which are substantially orthogonal. Alternatively, the numerical value recognition unit 121 detects the regions enclosed by the groups with the largest areas among the groups of mutually connected extracted edge pixels. Using a known OCR (Optical Character Recognition) technique, the numerical value recognition unit 121 attempts to detect a one-digit number in each detected region, and when a one-digit number is detected, detects that region as a partial area.
 As shown in FIG. 5B, there is a possibility that the value is not properly captured in the meter portion 503 detected from the input image 500. An image 520 shown in FIG. 5B is an image in which the entire meter portion 503 is unclear and the difference between the luminance values or color values of the digit portions and the background portion is small. In the image 520, the rectangular regions 521 to 524 are not detected as partial areas. An image 530 shown in FIG. 5B is an image in which ambient light such as illumination is reflected by the glass covering the front of the meter, so that disturbance light appears over the entire meter portion 503 and none of the digits can be identified. In the image 530, the rectangular regions 531 to 534 are not detected as partial areas. An image 540 shown in FIG. 5B is an image in which disturbance light appears on part of the meter portion 503 and the digit on which the disturbance light appears cannot be identified. In the image 540, the rectangular regions 541, 543, and 544 are detected as partial areas, but the rectangular region 542 is not. An image 550 shown in FIG. 5B is an image in which a shadow falls on part of the meter portion 503 and the digit on which the shadow falls is erroneously recognized. In the image 550, the rectangular regions 551 to 554 are detected as partial areas.
 Next, the numerical value recognition unit 121 assigns a digit number to each detected partial area (step S204). The numerical value recognition unit 121 divides the region inside the numerical background frame horizontally into equal parts, one per digit of the value indicated by the meter, and assigns digit numbers in ascending order from the right (the digit numbers 1, 2, 3, and 4 are assigned in order from the rightmost region). The numerical value recognition unit 121 assigns to each detected partial area the digit number assigned to the region that contains the center position of that partial area.
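The digit number assignment in step S204 can be sketched as follows, assuming the partial areas are given by their horizontal extents relative to the numerical background frame (the function name and data shapes are illustrative):

```python
def assign_digit_numbers(frame_width, regions, num_digits):
    """Assign digit numbers to detected partial areas.

    The numerical background frame of width frame_width is split into
    num_digits equal columns, numbered in ascending order from the
    right (1 = rightmost).  Each region, given as (left, right)
    x-coordinates relative to the frame, receives the number of the
    column containing its horizontal center.
    """
    column_width = frame_width / num_digits
    numbered = []
    for left, right in regions:
        center = (left + right) / 2
        column = int(center // column_width)   # 0 = leftmost column
        digit_no = num_digits - column         # 1 = rightmost column
        numbered.append(digit_no)
    return numbered
```

For a 400-pixel-wide frame with four digits, regions centered at x = 50, 150, 250, and 350 receive the digit numbers 4, 3, 2, and 1.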
 Next, the numerical value recognition unit 121 determines whether each detected partial area is usable in the numerical value recognition processing (step S205). The numerical value recognition unit 121 determines whether each partial area is usable in the numerical value recognition processing according to whether the partial area contains blur or shine. Blur means a region in which the differences between the luminance values of pixels in the image are small because of defocusing of the imaging device 104, or because the same object appears in a plurality of pixels in the image due to camera shake by the user. Shine means a region in which the luminance values of the pixels in a certain region of the image are saturated at a constant value (blown out) due to the influence of disturbance light or the like.
 The numerical value recognition unit 121 determines whether each partial area contains blur using a classifier pre-trained to output, when an image is input, a degree of blur indicating how blurred the input image is. This classifier is pre-trained, for example by deep learning, using images of photographed meters that contain no blur, and is stored in the storage device 110 in advance. The classifier may be further pre-trained using images of photographed meters that contain blur. The numerical value recognition unit 121 inputs the image including the detected partial area to the classifier, and determines whether the partial area contains blur according to whether the degree of blur output by the classifier is equal to or greater than a third threshold.
Alternatively, the numerical value recognition unit 121 may determine whether each partial area contains blur based on the edge strength of the luminance values of the pixels in the partial area. For each pixel in the partial area, the numerical value recognition unit 121 calculates, as the edge strength of that pixel, the absolute value of the difference between the luminance values of the pixels horizontally or vertically adjacent to it, or of a plurality of pixels located a predetermined distance away from it. The numerical value recognition unit 121 determines that the partial area contains blur when the average of the edge strengths calculated for the pixels in the partial area is equal to or less than a fourth threshold.
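The edge-strength criterion can be sketched as follows. This is an illustrative reading of the described method: `region` is a 2D list of luminance values, only horizontal neighbors are compared (vertical neighbors or pixels a fixed distance apart would work the same way), and the function names and threshold value are ours, not the patent's.

```python
def edge_strength_average(region):
    """Average absolute luminance difference between horizontally
    adjacent pixels; a small average suggests blur."""
    diffs = [abs(row[x + 1] - row[x])
             for row in region
             for x in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def contains_blur(region, fourth_threshold):
    # Judged blurred when the mean edge strength is at or below the threshold.
    return edge_strength_average(region) <= fourth_threshold
```

A defocused or shaken region has nearly uniform luminance, so its neighbor differences collapse toward zero, while sharp numeral edges produce large differences.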
Alternatively, the numerical value recognition unit 121 may determine whether each partial area contains blur based on the distribution of the luminance values of the pixels in the partial area. The numerical value recognition unit 121 generates a histogram of the luminance values of the pixels in the partial area, detects a local maximum in each of the luminance range corresponding to the numerals (white) and the luminance range corresponding to the background (black), and calculates the average of the full widths at half maximum of those local maxima. The numerical value recognition unit 121 determines that the partial area contains blur when the calculated average full width at half maximum is equal to or greater than a fifth threshold.
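The full-width-at-half-maximum measure can be illustrated with a minimal sketch over a precomputed luminance histogram. The peak index is assumed to have already been detected within the white or black range; the implementation details are ours.

```python
def full_width_at_half_maximum(hist, peak_index):
    """Number of consecutive histogram bins around the peak whose count
    stays at or above half the peak height. A sharp image yields narrow
    peaks (numerals mostly pure white, background mostly pure black);
    blur spreads intermediate luminances, widening the peaks."""
    half = hist[peak_index] / 2
    left = peak_index
    while left > 0 and hist[left - 1] >= half:
        left -= 1
    right = peak_index
    while right < len(hist) - 1 and hist[right + 1] >= half:
        right += 1
    return right - left + 1
```

In the described determination, this width would be computed for the white-range peak and the black-range peak, and the partial area judged blurred when the average of the two widths is at or above the fifth threshold.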
Further, the numerical value recognition unit 121 determines whether each partial area contains shine using a discriminator trained in advance to output, when an image is input, a shine degree indicating the extent to which the input image contains shine. This discriminator is trained in advance, for example by deep learning, using images in which a meter is photographed without shine, and is stored beforehand in the storage device 110. The discriminator may additionally be trained using images in which a meter is photographed with shine. The numerical value recognition unit 121 inputs an image containing the detected partial area into the discriminator, and determines that the partial area contains shine when the shine degree output by the discriminator is equal to or greater than a sixth threshold.
Alternatively, the numerical value recognition unit 121 may determine whether each partial area contains shine based on the luminance values of the pixels in the partial area. The numerical value recognition unit 121 counts the pixels in the partial area whose luminance value is equal to or greater than a seventh threshold (white), and determines that the partial area contains shine when the counted number is equal to or greater than an eighth threshold.
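This bright-pixel count can be sketched as follows; the default threshold values here are illustrative only, since the actual seventh and eighth thresholds are set experimentally.

```python
def contains_shine(region, seventh_threshold=250, eighth_threshold=20):
    """Judge a partial area as containing shine (glare) when the number
    of near-saturated (white) pixels reaches the eighth threshold.
    Threshold defaults are illustrative, not the patent's values."""
    saturated = sum(1 for row in region for v in row if v >= seventh_threshold)
    return saturated >= eighth_threshold
```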
Alternatively, the numerical value recognition unit 121 may determine whether each partial area contains shine based on the distribution of the luminance values of the pixels in the partial area. The numerical value recognition unit 121 generates a histogram of the luminance values of the pixels in the partial area, and determines that the partial area contains shine when the number of pixels distributed in the region at or above the seventh threshold is equal to or greater than the eighth threshold.
Each of the thresholds and ranges described above is set in advance through prior experiments.
Next, the numerical value recognition unit 121 stores information on the partial areas determined to be usable in the management table of the storage device 110 (step S206), and ends the series of steps.
FIG. 6 is a diagram showing an example of the data structure of the management table.
In the management table, for each input image, identification information of the input image (input image ID), evidence image information, partial area information, digit values, and the like are stored in association with one another. An input image ID is uniquely assigned to each input image. The evidence image information indicates the storage location and the like of the evidence image. The evidence image is an image saved as evidence; for example, an image obtained by cutting out the region inside the numeral background frame from the input image is used as the evidence image. The evidence image is an example of at least a part of the input image. The partial area information indicates the storage location and the like of a partial area image, and is stored for each partial area corresponding to each digit. A partial area image is an image obtained by cutting out a partial area from the input image. A digit value is the value identified in each partial area, and is stored for each digit.
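One way to picture a row of this table is the following sketch; the field names and the plain file paths are ours, chosen for illustration, and do not reflect the patent's actual storage format.

```python
from dataclasses import dataclass, field

@dataclass
class ManagementRecord:
    """One entry of the management table, keyed by input image."""
    input_image_id: int                        # uniquely assigned per input image
    evidence_image_path: str = ""              # storage location of the evidence image
    partial_area_paths: dict = field(default_factory=dict)  # digit number -> partial area image path
    digit_values: dict = field(default_factory=dict)        # digit number -> identified digit value
```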
The numerical value recognition unit 121 cuts out each partial area determined to be usable from the input image to generate a partial area image, stores the partial area image in the storage device 110, and stores its storage location in the management table as the partial area information. Alternatively, instead of cutting out each usable partial area from the input image, the numerical value recognition unit 121 may store the position information of each partial area within the input image in the management table as the partial area information.
Further, when the input image contains a partial area determined to be usable, the numerical value recognition unit 121 cuts out the region inside the numeral background frame from the input image to generate an evidence image, stores it in the storage device 110, and stores its storage location in the management table as the evidence image information. An image obtained by cutting out the region inside the plate frame from the input image, or the input image itself, may instead be used as the evidence image. In that case, the numerical value recognition unit 121 stores that cut-out image or the input image itself in the storage device 110 as the evidence image, and stores its storage location in the management table as the evidence image information.
FIG. 7 is a flowchart showing an example of the operation of the numerical value recognition process. The flow of operations shown in FIG. 7 is executed in step S104 of the flowchart shown in FIG. 3. Each of steps S301 to S309 in FIG. 7 is executed for each partial area determined to be usable in the numerical value recognition process, i.e., for each digit of the numerical value in the meter shown in the input images sequentially generated by the imaging device 104.
First, the numerical value recognition unit 121 determines whether the digit value of the digit number assigned to the partial area being processed has already been confirmed (step S301).
If the digit value of that digit number has already been confirmed, the numerical value recognition unit 121 moves the process to step S310. On the other hand, if the digit value of that digit number has not been confirmed, the numerical value recognition unit 121 identifies the digit value shown in the partial area (step S302).
The numerical value recognition unit 121 identifies the digit value shown in the partial area using a discriminator trained in advance to output, when an image showing a single-digit numeral is input, the numeral shown in that image. This discriminator is trained in advance, for example by deep learning, using a plurality of images in which each numeral in a meter is photographed, and is stored beforehand in the storage device 110. The numerical value recognition unit 121 inputs an image containing the partial area into the discriminator, and identifies the value output by the discriminator as the digit value shown in the partial area. The numerical value recognition unit 121 may instead identify the digit value shown in the partial area using a known OCR technique.
Next, the numerical value recognition unit 121 converts the identified value into an integer (step S303). In general, a meter displays the measured value on its plate by rotating cylindrical drums on whose side surfaces a plurality of numerals are printed. Therefore, if the meter is photographed while a drum is rotating and the printed numerals are moving, the partial area may contain a plurality of numerals. For this reason, the discriminator is trained in advance to output a decimal value corresponding to the region in which each numeral appears when the partial area contains a plurality of numerals. The numerical value recognition unit 121 rounds the decimal part of the value output by the discriminator up, down, or to the nearest integer, thereby converting the value into an integer.
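The integer conversion can be sketched as follows. The rounding mode is a design choice the passage leaves open, and the wrap from 10 back to 0 (for a drum rolling from 9 toward 0) is our own handling, not stated in the source.

```python
import math

def to_digit(classifier_output, mode="round"):
    """Convert a fractional drum reading (e.g. 3.7 while the drum rolls
    from 3 toward 4) into a single digit, wrapping 10 back to 0."""
    if mode == "up":
        value = math.ceil(classifier_output)
    elif mode == "down":
        value = math.floor(classifier_output)
    else:  # round half up
        value = math.floor(classifier_output + 0.5)
    return value % 10
```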
Next, the numerical value recognition unit 121 stores the identified value in the management table of the storage device 110 as the digit value of the digit number assigned to that partial area (step S304).
Next, the numerical value recognition unit 121 determines whether the numerical value recognition process has been executed on a predetermined number (for example, 10) or more of input images (step S305).
If the numerical value recognition process has not yet been executed on the predetermined number or more of input images, the numerical value recognition unit 121 moves the process to step S310. On the other hand, if the numerical value recognition process has been executed on the predetermined number or more of input images, the numerical value recognition unit 121 identifies the mode among the digit values identified for the most recent predetermined number of input images (step S306). Referring to the management table, the numerical value recognition unit 121 identifies, as the mode, the digit value stored most frequently among the most recent predetermined number of digit values for the digit number assigned to the partial area being processed.
Next, the numerical value recognition unit 121 calculates the ratio of the number of occurrences of the identified mode (within the most recent predetermined number of digit values) to the predetermined number (step S307).
Next, the numerical value recognition unit 121 determines whether the calculated ratio exceeds a ninth threshold (step S308). The ninth threshold is set to, for example, 50%.
If the calculated ratio does not exceed the ninth threshold, the numerical value recognition unit 121 regards the mode as not yet reliable and moves the process to step S310. On the other hand, if the calculated ratio exceeds the ninth threshold, the numerical value recognition unit 121 identifies the mode as the confirmed digit value (step S309). Since the numerical value recognition unit 121 confirms a digit value only when the calculated ratio exceeds the ninth threshold, the reliability of the recognized value can be further increased.
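Steps S306 to S309 amount to majority voting over the most recent readings of one digit. A minimal sketch, where the function name and the `None` return for "not yet confirmed" are our own conventions:

```python
from collections import Counter

def confirm_digit(recent_values, ninth_threshold=0.5):
    """Return the mode of the recent digit values for one digit if its
    share of occurrences exceeds the ninth threshold; otherwise None."""
    if not recent_values:
        return None
    mode, count = Counter(recent_values).most_common(1)[0]
    if count / len(recent_values) > ninth_threshold:
        return mode
    return None
```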
Next, the numerical value recognition unit 121 determines whether the processing has been completed for all the detected partial areas (step S310).
If a partial area that has not yet been processed remains, the numerical value recognition unit 121 returns the process to step S301 and repeats steps S301 to S310. On the other hand, when the processing has been completed for all the detected partial areas, the numerical value recognition unit 121 determines whether the digit values of all the digit numbers have been confirmed (step S311).
If the digit values of all the digit numbers have not yet been confirmed, the numerical value recognition unit 121 ends the series of steps without performing any further processing. On the other hand, when the digit values of all the digit numbers have been confirmed, the numerical value recognition unit 121 recognizes, as the numerical value in the meter, the value obtained by combining the confirmed digit values identified for all the digits (step S312), and ends the series of steps.
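Step S312 can be pictured as concatenating the confirmed digit values in digit-number order; the assumption that digit number 1 is the most significant (leftmost) digit is ours, for illustration.

```python
def combine_digits(confirmed):
    """Combine per-digit confirmed values into the meter reading.
    `confirmed` maps digit number (1 = leftmost) to its value 0-9."""
    return int("".join(str(confirmed[n]) for n in sorted(confirmed)))
```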
In this way, the numerical value recognition unit 121 identifies and tallies, digit by digit, the values in the meter shown in the sequentially generated input images, and recognizes the value in the meter based on the tallied results. Even for an input image in which the value of a particular digit cannot be identified, the numerical value recognition unit 121 identifies and tallies the values of the other digits, and can therefore recognize the value in the meter accurately using fewer input images. Since the user no longer needs to keep photographing the meter until an input image in which all the digits are identifiable is generated, the image processing device 100 can improve user convenience.
Alternatively, the numerical value recognition unit 121 may identify and tally the values in the meter shown in the sequentially generated input images with all the digits taken together, and recognize the value in the meter based on the tallied results.
In step S305, even if the number of input images on which the numerical value recognition process has been executed has not reached the predetermined number, the numerical value recognition unit 121 may execute the processing from step S306 onward as long as that number is sufficient to confirm a digit value. For example, when the predetermined number is 10 and the ninth threshold is 50%, if the digit values identified for the input images are all identical at the point where the numerical value recognition process has been executed on six input images, that digit value is the mode and the ratio of its occurrences is already at least 60%. In such a case, the numerical value recognition unit 121 may confirm the recognized value even though the number of processed input images has not reached the predetermined number. This allows the numerical value recognition unit 121 to shorten the recognition time of the numerical value recognition process.
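The early-termination condition in this example is that the mode's count already exceeds the ninth threshold even when measured against the full predetermined number of images, so no outcome of the remaining images could undo the confirmation. A sketch under the same illustrative numbers (10 images, 50%):

```python
from collections import Counter

def can_confirm_early(values_so_far, predetermined=10, ninth_threshold=0.5):
    """Return the digit value if its occurrences so far already exceed
    the ninth threshold of the full predetermined number of images;
    otherwise None (keep collecting images)."""
    if not values_so_far:
        return None
    mode, count = Counter(values_so_far).most_common(1)[0]
    if count / predetermined > ninth_threshold:
        return mode
    return None
```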
FIG. 8 is a flowchart showing an example of the operation of the evidence image saving process. The flow of operations shown in FIG. 8 is executed in step S106 of the flowchart shown in FIG. 3.
First, the determination unit 122 determines, for every partial area detected by the numerical value recognition unit 121, whether each partial area is clear (step S401). The determination unit 122 may instead make this determination only for partial areas whose digit value identified by the numerical value recognition unit 121 matches the corresponding digit value in the numerical value recognized by the numerical value recognition unit 121.
A partial area being clear means that the numeral contained in the partial area is recognizable, i.e., that the partial area contains neither blur nor shine. Conversely, a partial area being unclear means that the numeral contained in the partial area is not recognizable, i.e., that the partial area contains blur or shine. The determination unit 122 determines whether each partial area is clear according to whether it contains blur or shine.
In the same manner as in step S205, the determination unit 122 determines whether each partial area contains blur or shine using a discriminator trained in advance using at least images in which a meter is photographed without blur or shine, or alternatively based on the luminance values of the image in each partial area. However, the determination unit 122 makes it easier to determine that a partial area contains blur or shine than in step S205: it sets the third, fifth, sixth, or eighth threshold smaller than the value used in step S205, and the fourth threshold larger than the value used in step S205.
The determination unit 122 may instead determine whether each partial area contains blur or shine according to the same criteria as in step S205. In that case, the determination unit 122 may reuse the determination results from step S205, which shortens the processing time of the determination process and reduces the processing load.
Next, the control unit 123 determines whether there is a single input image in which the partial areas corresponding to all the digits have been determined to be clear (step S402).
If such a single input image exists, the control unit 123 selects it. The control unit 123 stores only the evidence image corresponding to that selected input image as the evidence image in the storage device 110, in association with the numerical value recognized by the numerical value recognition unit 121 (step S403), and ends the series of steps.
On the other hand, if no input image exists in which the partial areas corresponding to all the digits have been determined to be clear, the control unit 123 identifies the digits for which no partial area has been determined to be clear in any of the input images whose corresponding evidence images are to be used as evidence images (step S404). When the process of step S404 is executed for the first time, all the digits are identified as digits for which no partial area has been determined to be clear.
Next, for the identified digits, the control unit 123 selects the input image containing the largest number of partial areas determined to be clear as an input image whose corresponding evidence image is to be used as an evidence image (step S405). In this way, the control unit 123 selects, for each digit, an input image in which the partial area corresponding to that digit has been determined to be clear.
Next, the control unit 123 determines whether any digit remains for which the partial area has not been determined to be clear in any of the selected input images (step S406).
If such a digit remains, the control unit 123 returns the process to step S404 and again identifies the digits for which no partial area has been determined to be clear. On the other hand, if no such digit remains, the control unit 123 stores the evidence images corresponding to the plurality of input images selected in step S405 in the storage device 110 as evidence images (step S407), and ends the series of steps. The control unit 123 stores each evidence image in the storage device 110 in association with the numerical value recognized by the numerical value recognition unit 121.
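Steps S404 to S407 behave like a greedy set cover over the digits: repeatedly pick the input image whose clear partial areas cover the most still-uncovered digits. A minimal sketch, where the identifiers are ours and the loop assumes every digit is clear in at least one candidate image:

```python
def select_evidence_images(clear_digits_by_image):
    """`clear_digits_by_image` maps an input image ID to the set of digit
    numbers whose partial areas were judged clear in that image. Returns
    the IDs whose evidence images together cover every digit."""
    remaining = set().union(*clear_digits_by_image.values())
    selected = []
    while remaining:
        # Pick the image covering the most not-yet-covered digits.
        best = max(clear_digits_by_image,
                   key=lambda img: len(clear_digits_by_image[img] & remaining))
        selected.append(best)
        remaining -= clear_digits_by_image[best]
    return selected
```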
FIG. 9 is a diagram for explaining combinations of evidence images saved in the storage device 110.
Combination 1 in FIG. 9 shows the case where the partial areas corresponding to all the digits (digits 1 to 4) in input image 1 have been determined to be clear. In this case, the evidence image of input image 1 is used as the evidence image. Combination 2 shows the case where no input image exists in which the partial areas corresponding to all the digits (digits 1 to 4) are clear, and only the partial areas corresponding to three digits (digits 1 to 3) in input image 1 have been determined to be clear. In this case, the evidence image of input image 1 and the evidence image of input image 2, in which the partial area corresponding to digit 4 (not determined to be clear in input image 1) has been determined to be clear, are used as the evidence images.
Combinations 3 and 4 show cases where no input image exists in which partial areas corresponding to three digits are clear, and only the partial areas corresponding to two digits (digits 1 and 2) in input image 1 have been determined to be clear. Combination 3 shows the case where the partial areas corresponding to digits 3 and 4, which were not determined to be clear in input image 1, have been determined to be clear in input image 2. In this case, the evidence images of input image 1 and input image 2 are used as the evidence images. Combination 4, on the other hand, shows the case where no single input image exists in which the partial areas corresponding to both digits 3 and 4 have been determined to be clear. In this case, the evidence image of input image 1, the evidence image of input image 2 in which the partial area corresponding to digit 3 has been determined to be clear, and the evidence image of input image 3 in which the partial area corresponding to digit 4 has been determined to be clear are used as the evidence images.
Combination 5 shows the case where no input image exists in which partial areas corresponding to two digits are clear. In this case, the evidence images of input images 1 to 4, in each of which the partial area corresponding to one of digits 1 to 4 has been determined to be clear, are used as the evidence images.
In this way, for each of the partial areas corresponding to all the digits, the control unit 123 uses as an evidence image the evidence image corresponding to one of the input images in which that partial area has been determined to be clear. In particular, the control unit 123 uses as evidence images the evidence images of other input images that complement the partial areas not determined to be clear in an input image whose evidence image is already in use. As a result, even when no input image exists in which all the partial areas have been determined to be clear, the control unit 123 can efficiently store images suitable as evidence by storing a plurality of images that together have evidentiary value. The user no longer needs to keep photographing the meter until an input image in which all the partial areas are clear is generated, so the image processing device 100 can improve user convenience. In addition, the image processing device 100 does not need to save images for all the input images used in the numerical value recognition process, which also makes it possible to reduce the required storage capacity of the storage device 110.
When the criterion for determining whether a partial area is usable in the numerical value recognition process differs from the criterion for determining whether it is clear, a complete set of evidence images in which the partial areas corresponding to all the digits are clear may not have been collected even after the numerical value recognition unit 121 has recognized the value in the meter. In that case, the control unit 123 may keep causing the imaging device 104 to image the meter and generate input images until such a set of evidence images has been collected. Since the numerical value recognition unit 121 has already recognized the value in the meter, it stops tallying the values in the meter, but the control unit 123 continues determining whether each partial area in the newly generated input images is clear. When evidence images in which the partial areas corresponding to all the digits are clear have been collected, the control unit 123 causes the imaging device 104 to stop imaging the meter and ends the evidence image saving process. This allows the image processing device 100 to acquire evidence images that are more reliably identifiable by visual inspection.
 The control unit 123 may also accept a stop-shooting instruction, by which the user directs that imaging of the meter be stopped, if evidence images in which the partial regions corresponding to all digits are sharp have not been collected within a predetermined time after the numerical recognition unit 121 recognized the value in the meter. In that case, the control unit 123 displays on the display device 103 display data such as a button for accepting the stop-shooting instruction from the user. When the user taps that button via the input device 102 to input the stop-shooting instruction and the control unit 123 receives the stop-shooting instruction signal from the input device 102, the control unit 123 accepts the instruction.
 When the stop-shooting instruction is accepted, the control unit 123 selects, for each digit, the partial region judged to be the sharpest among the partial regions corresponding to that digit, and uses as evidence images the evidence-candidate images of the input images containing the selected partial regions. For example, the control unit 123 selects as the sharpest the partial region for which the blur degree output by each classifier is smallest, the partial region whose average edge strength is largest, or the partial region whose average half-width is smallest. Alternatively, the control unit 123 selects as the sharpest the partial region for which the shine degree output by each classifier is smallest, or the partial region with the fewest pixels whose luminance value is at or above the seventh threshold (or the fewest pixels distributed in the histogram region at or above the seventh threshold). If no sharp evidence image was acquired within the predetermined time, the user can thus inspect the sharpest image acquired so far and decide whether to use it as the evidence image as it is or to photograph the meter again.
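The "sharpest partial region" selection relies on metrics such as edge strength and shine (highlight) degree. Below is a minimal, dependency-free sketch of two stand-in metrics and the selection rule; the exact classifiers, half-width measure, and thresholds of the text are not reproduced, so these formulas are illustrative assumptions:

```python
def edge_strength(patch):
    """Mean absolute horizontal+vertical gradient of a grayscale patch
    (a list of rows of 0-255 ints) -- a stand-in for the average edge
    strength in the text: larger means sharper."""
    h, w = len(patch), len(patch[0])
    total = n = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(patch[y][x + 1] - patch[y][x]); n += 1
            if y + 1 < h:
                total += abs(patch[y + 1][x] - patch[y][x]); n += 1
    return total / n

def glare_fraction(patch, threshold=240):
    """Fraction of pixels at or above `threshold` -- a stand-in for the
    shine degree: smaller means less specular highlight."""
    pixels = [v for row in patch for v in row]
    return sum(v >= threshold for v in pixels) / len(pixels)

def sharpest(patches):
    """Among the candidate partial regions for one digit, return the
    index of the one with the largest edge strength, breaking ties
    toward the region with less glare."""
    return max(range(len(patches)),
               key=lambda i: (edge_strength(patches[i]), -glare_fraction(patches[i])))
```

In practice a learned classifier (as the text describes) or a variance-of-Laplacian focus measure would replace these toy metrics; the structure of the per-digit selection is the same.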
 As detailed above, for each digit of the value in the meter, the image processing device 100 stores as an evidence image the evidence-candidate image corresponding to an input image whose partial region for that digit was determined to be sharp. The image processing device 100 can thereby store images suitable as evidence efficiently.
 FIG. 10 is a block diagram showing the schematic configuration of a processing circuit 230 in an image processing device according to another embodiment.
 The processing circuit 230 is used in place of the processing circuit 130 of the image processing device 100 and executes the overall processing in place of the CPU 120. The processing circuit 230 includes a numerical recognition circuit 231, a determination circuit 232, a control circuit 233, and the like.
 The numerical recognition circuit 231 is an example of a numerical recognition unit and has the same functions as the numerical recognition unit 121. It sequentially acquires input images of the meter from the imaging device 104 and sequentially stores them in the storage device 110. It also identifies and tallies the value in the meter shown in each input image, recognizes the value in the meter based on the tally result, and stores the recognition result in the storage device 110.
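The tally-and-recognize behavior (spelled out as per-digit mode voting in claim 5 below) can be sketched as follows; the 60% ratio threshold and the function names are illustrative assumptions:

```python
from collections import Counter

def confirm_digit(readings, ratio_threshold=0.6):
    """Given the digit values read at one digit position over a
    predetermined number of frames, return the mode if its share of the
    readings exceeds the threshold, else None (not yet confirmed)."""
    if not readings:
        return None
    value, count = Counter(readings).most_common(1)[0]
    return value if count / len(readings) > ratio_threshold else None

def recognize_value(per_digit_readings, ratio_threshold=0.6):
    """Combine the confirmed digit values (most significant first) into
    the meter value; return None while any digit is unconfirmed."""
    digits = [confirm_digit(r, ratio_threshold) for r in per_digit_readings]
    if any(d is None for d in digits):
        return None
    return int("".join(str(d) for d in digits))
```

Voting across frames in this way makes a single misread frame (glare, motion blur) unable to corrupt the recognized value, which is the point of tallying before recognizing.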
 The determination circuit 232 is an example of a determination unit and has the same functions as the determination unit 122. For each partial region corresponding to a digit of the value in the meter, it determines whether that partial region is sharp, and outputs the determination result to the control circuit 233.
 The control circuit 233 is an example of a control unit and has the same functions as the control unit 123. For each digit, it selects an input image whose partial region corresponding to that digit was determined to be sharp, and stores the evidence-candidate image corresponding to the selected input image in the storage device 110 as an evidence image associated with the value recognized by the numerical recognition circuit 231.
 As detailed above, the image processing device 100 can efficiently store images suitable as evidence even when the processing circuit 230 is used.
 Although preferred embodiments of the present invention have been described above, the present invention is not limited to these embodiments. For example, the classifiers used in the partial-region detection process, the numerical recognition process, or the evidence-image saving process may be stored not in the storage device 110 but in an external device such as a server. In that case, the numerical recognition unit 121 transmits each image to the server via the communication device 101, and receives and acquires from the server the classification results output by the classifiers.
 The image processing device 100 is also not limited to a portable information processing device; it may be, for example, a fixed-point camera installed so that it can image the meter.
 REFERENCE SIGNS LIST
 100  image processing device
 104  imaging device
 110  storage device
 121  numerical value recognition unit
 122  determination unit
 123  control unit

Claims (7)

  1.  An image processing device comprising:
      a storage unit;
      an imaging unit that sequentially generates input images in which a meter is photographed;
      a numerical value recognition unit that identifies and tallies a value in the meter shown in each of the sequentially generated input images, and recognizes the value in the meter based on the tally result;
      a determination unit that determines, for each partial region corresponding to a digit of the value in the meter, whether that partial region is sharp; and
      a control unit that, for each digit, selects an input image whose partial region corresponding to that digit was determined to be sharp, and stores at least part of the selected input image in the storage unit as an evidence image associated with the value recognized by the numerical value recognition unit.
  2.  The image processing device according to claim 1, wherein the control unit stores, when there is a single input image in which all the partial regions were determined to be sharp, at least part of only that input image in the storage unit as an evidence image, and stores, when there is no input image in which all the partial regions were determined to be sharp, at least part of each of a plurality of input images in the storage unit as evidence images.
  3.  The image processing device according to claim 1 or 2, wherein the determination unit determines whether each partial region is sharp using a classifier trained in advance on at least sharp images in which a meter is photographed.
  4.  The image processing device according to claim 1 or 2, wherein the determination unit determines whether each partial region is sharp based on luminance values of the image within each partial region.
  5.  The image processing device according to any one of claims 1 to 4, wherein the numerical value recognition unit:
      identifies a digit value for each digit of the value in the meter shown in each of the sequentially generated input images, identifies the mode among the digit values identified for a predetermined number of input images, and, when the ratio of the number of occurrences of the identified mode to the predetermined number exceeds a threshold, fixes the identified mode as a confirmed digit value; and
      recognizes, as the value in the meter, the number obtained by combining the confirmed digit values identified for all the digits.
  6.  A control method for an image processing device having a storage unit and an imaging unit that sequentially generates input images in which a meter is photographed, the method comprising:
      identifying and tallying a value in the meter shown in each of the sequentially generated input images, and recognizing the value in the meter based on the tally result;
      determining, for each partial region corresponding to a digit of the value in the meter, whether that partial region is sharp; and
      for each digit, selecting an input image whose partial region corresponding to that digit was determined to be sharp, and storing at least part of the selected input image in the storage unit as an evidence image associated with the recognized value.
  7.  A control program for an image processing device having a storage unit and an imaging unit that sequentially generates input images in which a meter is photographed, the control program causing the image processing device to execute:
      identifying and tallying a value in the meter shown in each of the sequentially generated input images, and recognizing the value in the meter based on the tally result;
      determining, for each partial region corresponding to a digit of the value in the meter, whether that partial region is sharp; and
      for each digit, selecting an input image whose partial region corresponding to that digit was determined to be sharp, and storing at least part of the selected input image in the storage unit as an evidence image associated with the recognized value.
PCT/JP2017/011039 2017-03-17 2017-03-17 Image processing device, control method, and control program WO2018167974A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2019505670A JP6707178B2 (en) 2017-03-17 2017-03-17 Image processing device, control method, and control program
PCT/JP2017/011039 WO2018167974A1 (en) 2017-03-17 2017-03-17 Image processing device, control method, and control program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/011039 WO2018167974A1 (en) 2017-03-17 2017-03-17 Image processing device, control method, and control program

Publications (1)

Publication Number Publication Date
WO2018167974A1

Family

Family ID: 63523497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/011039 WO2018167974A1 (en) 2017-03-17 2017-03-17 Image processing device, control method, and control program

Country Status (2)

Country Link
JP (1) JP6707178B2 (en)
WO (1) WO2018167974A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01177178A (en) * 1988-01-04 1989-07-13 Sumitomo Electric Ind Ltd Character recognizing device
JP2007011654A (en) * 2005-06-30 2007-01-18 Seiko Epson Corp Method of reading embossed character
WO2015015535A1 (en) * 2013-07-31 2015-02-05 グローリー株式会社 Bill handling system, bill handing apparatus, and bill handling method
JP5879455B1 (en) * 2015-10-16 2016-03-08 株式会社ネフロンジャパン Water meter reading device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020052981A (en) * 2018-09-28 2020-04-02 株式会社東芝 Information processing device, learning device, information processing system, information processing method, and computer program
JP2020087397A (en) * 2018-11-19 2020-06-04 奇邑科技股▲ふん▼有限公司 Method, device and system for reading intelligence information
JP2021012665A (en) * 2019-07-09 2021-02-04 三菱重工業株式会社 Indicated value reading system and method and program
JP7187394B2 (en) 2019-07-09 2022-12-12 三菱重工業株式会社 Indicated value reading system, method and program
WO2022014145A1 (en) * 2020-07-14 2022-01-20 ダイキン工業株式会社 Image processing device, air processing system, image processing program, and image processing method
JP2022018021A (en) * 2020-07-14 2022-01-26 ダイキン工業株式会社 Image processing device, air treatment system, image processing program and image processing method
JP7014982B2 (en) 2020-07-14 2022-02-15 ダイキン工業株式会社 Image processing equipment, air processing system, image processing program, and image processing method
CN116157832A (en) * 2020-07-14 2023-05-23 大金工业株式会社 Image processing device, air processing system, image processing program, and image processing method
US11810328B2 (en) 2020-07-14 2023-11-07 Daikin Industries, Ltd. Image processing device, air treatment system, image processing program, and image processing method

Also Published As

Publication number Publication date
JP6707178B2 (en) 2020-06-10
JPWO2018167974A1 (en) 2019-07-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 17901047; country of ref document: EP; kind code of ref document: A1)
ENP Entry into the national phase (ref document number: 2019505670; country of ref document: JP; kind code of ref document: A)
NENP Non-entry into the national phase (ref country code: DE)
122 Ep: pct application non-entry in european phase (ref document number: 17901047; country of ref document: EP; kind code of ref document: A1)