WO2023204265A1 - Signal processing system and signal processing method - Google Patents

Signal processing system and signal processing method Download PDF

Info

Publication number
WO2023204265A1
WO2023204265A1 (PCT/JP2023/015725)
Authority
WO
WIPO (PCT)
Prior art keywords
captured image
signal processing
processing system
analysis
video signal
Prior art date
Application number
PCT/JP2023/015725
Other languages
French (fr)
Japanese (ja)
Inventor
宣彰 倉林
Original Assignee
KYOCERA Corporation (京セラ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KYOCERA Corporation (京セラ株式会社)
Publication of WO2023204265A1 publication Critical patent/WO2023204265A1/en

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 — Character recognition
    • G06V30/12 — Detection or correction of errors, e.g. by rescanning the pattern
    • G06V30/14 — Image acquisition
    • G06V30/24 — Character recognition characterised by the processing or recognition method

Definitions

  • The present disclosure relates to a signal processing system and the like that generates captured images to be analyzed.
  • Patent Document 1 discloses a technique for performing OCR processing on an image taken with a camera and converting it into character data.
  • A signal processing system according to one aspect of the present disclosure includes: a distributor that distributes a video signal to be displayed on a display device; a capture unit that generates an analyzable captured image from the distributed video signal; and an output unit that outputs the captured image to an analysis device that analyzes the captured image.
  • A signal processing method according to one aspect of the present disclosure is a signal processing method for generating an image to be analyzed, using a distributor that distributes a video signal to be displayed on a display device. The method includes a distribution step of distributing the video signal with the distributor, a capture step of generating an analyzable captured image from the distributed video signal, and an output step of outputting the captured image to an analysis device that analyzes the captured image.
  • FIG. 1 is a schematic diagram showing an overall outline of a signal processing system according to an embodiment of the present disclosure.
  • FIG. 2 is a functional block diagram showing the main-part configuration of the analysis device.
  • FIG. 3 is a diagram illustrating an example of analyzing only a part of a captured image.
  • FIG. 4 is a diagram illustrating an example of determining whether or not to analyze a captured image.
  • FIG. 5 is a diagram illustrating another example of determining whether or not to analyze a captured image.
  • FIG. 6 is a diagram illustrating an example of presentation of the analysis results of the analysis unit.
  • FIG. 7 is a functional block diagram showing the main-part configuration of another analysis device.
  • FIG. 8 is a diagram illustrating an example of correcting analysis results.
  • FIG. 9 is a schematic diagram showing an overall outline of another signal processing system.
  • FIG. 10 is a schematic diagram showing an overall outline of yet another signal processing system.
  • FIG. 11 is a schematic diagram showing an overall outline of a modification of the signal processing system.
  • Therefore, there is room for improvement in the technique described in Patent Document 1 in terms of easily and accurately acquiring information.
  • The signal processing system and signal processing method according to one aspect of the present disclosure distribute a video signal and analyze a captured image generated from the distributed video signal, and therefore do not require a camera. Further, because the captured image is generated directly from the video signal, the information included in the image can be acquired accurately. Therefore, compared with the case of analyzing images taken with a camera, the signal processing system and signal processing method according to one aspect of the present disclosure can perform analysis easily and acquire accurate information.
  • FIG. 1 is a schematic diagram for explaining a signal processing system 1.
  • The signal processing system 1 includes a distributor 11, a capture unit 12, and an output unit 13.
  • The signal processing system 1 may further include the analysis device 30.
  • The distributor 11 distributes the video signal transmitted from the device control device 10 to the display device 20 (distribution step).
  • The distributed video signal is transmitted to the display device 20 and to the capture unit 12.
  • The device control device 10 is, for example, a control device that controls production equipment.
  • The device control device 10, for example, generates a screen that presents data acquired from a controlled object to a user, and transmits a video signal for displaying that screen to the display device 20. The screen is thereby displayed on the display device 20.
  • The display device 20 is a so-called display and is capable of displaying various information.
  • The display device 20 may be integrally attached to the device control device 10, or may be connected to the device control device 10 by wire.
  • The capture unit 12 generates a captured image from the video signal distributed by the distributor 11 (capture step).
  • The generated captured image is sent to the output unit 13.
  • The capture unit 12 captures the input video signal at a predetermined frequency to generate captured images.
  • The predetermined frequency may be lower than the refresh rate of the display device 20. In other words, the number of captures per unit time by the capture unit 12 may be smaller than the number of refreshes of the display device 20 in the same period. This is because the displayed image cannot change more frequently than the refresh rate, so there is no point in capturing more frequently than the refresh rate. Making the predetermined frequency lower than the refresh rate of the display device 20 therefore eliminates wasted captures.
  • The predetermined frequency may be set in advance.
  • The interval at which the capture unit 12 performs capture may be adjustable. By adjusting the capture interval, the capture unit 12 can generate an appropriate amount of captured images to send to the output unit 13. This reduces the possibility that the capture unit 12 and the analysis device 30 will become overloaded.
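As an illustrative sketch (not part of the patent; the function name and the `captures_per_refresh` parameter are assumptions), the relationship between the refresh rate and a non-wasteful capture period could look like this:

```python
def capture_interval(refresh_rate_hz: float, captures_per_refresh: float = 0.5) -> float:
    """Return the capture period in seconds.

    Keeping captures_per_refresh at or below 1 keeps the capture
    frequency at or below the display's refresh rate, since the
    displayed image cannot change more often than the refresh rate.
    """
    if not 0.0 < captures_per_refresh <= 1.0:
        raise ValueError("capture frequency must not exceed the refresh rate")
    return 1.0 / (refresh_rate_hz * captures_per_refresh)
```

For a 60 Hz display with half-rate capture this yields one capture every 1/30 s; lengthening the interval reduces the load on the capture unit and the analysis device.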
  • The captured image generated by the capture unit 12 may be recorded in a predetermined storage unit (not shown).
  • The output unit 13 outputs the captured image generated by the capture unit 12 to the analysis device 30 (output step).
  • The analysis device 30 analyzes the captured image generated by the capture unit 12. Details of the analysis device 30 will be described with reference to FIG. 2.
  • FIG. 2 is a functional block diagram showing the main part configuration of the analysis device 30.
  • The analysis device 30 includes an acquisition unit 31, an analysis unit 32, and a presentation unit 34. The analysis unit 32 may further include an image determination unit 33.
  • The acquisition unit 31 acquires the captured image from the output unit 13 and transmits it to the analysis unit 32.
  • The analysis unit 32 analyzes the captured image acquired via the acquisition unit 31.
  • An example of the analysis performed by the analysis unit 32 is OCR (Optical Character Recognition/Reader) processing. OCR processing makes it possible to obtain, from the captured image, character data indicating the characters included in it.
  • The analysis performed by the analysis unit 32 is not limited to OCR, and may be an analysis for acquiring a pattern included in the captured image as data.
  • The analysis unit 32 may perform analysis using a pattern such as an icon included in the captured image. Specifically, if the captured image includes a pattern such as an icon indicating the status of the equipment to be monitored (device control device 10), the analysis unit 32 may acquire that pattern as data to be analyzed. In this case, the analysis unit 32 may convert the state of the monitored equipment into data by performing pattern matching against the acquired pattern data.
  • The analysis unit 32 may be capable of executing multiple analyses using different processing methods in parallel, or, even with the same processing method, may be capable of executing multiple analyses using different processing algorithms in parallel. Examples of analyses using different processing algorithms include analyses using different parameter values.
  • The analysis unit 32 may analyze only a part of the captured image.
  • An example in which only a part of one captured image is analyzed will be described with reference to FIG. 3.
  • Reference numeral 301 in FIG. 3 indicates a captured image.
  • The analysis unit 32 analyzes, for example, only the region 311 of the captured image 301, that is, only data A and data B.
  • The region to be analyzed may be determined in advance. By analyzing only a portion of the captured image, the analysis load can be reduced compared with analyzing the entire captured image.
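As a hedged sketch (the function name and treating the image as a 2-D grid are illustrative assumptions, not from the patent), restricting analysis to a predetermined region could be done by cropping before OCR:

```python
def crop_region(image, top, left, height, width):
    """Return only the part of a 2-D pixel grid inside the
    predetermined region (e.g. region 311 of captured image 301),
    so that only the needed data is passed to the analysis."""
    return [row[left:left + width] for row in image[top:top + height]]
```

Cropping shrinks the input handed to the analysis, which is what reduces the load compared with analyzing the whole captured image.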
  • The image determination unit 33 determines whether the captured image acquired via the acquisition unit 31 is a captured image to be analyzed.
  • The analysis unit 32 may analyze only captured images that the image determination unit 33 determines to be analysis targets.
  • The image determination unit 33 may determine that captured images meeting a predetermined condition are to be analyzed. By analyzing only captured images that meet the predetermined condition, the analysis load can be reduced and the reading of erroneous data from unintended screens can be reduced, compared with analyzing all captured images.
  • The predetermined condition may be, for example, that a predetermined mark is present at a predetermined position, or that the overall layout of the captured image matches a predetermined layout. Alternatively, the condition may be that the captured image was generated at a predetermined time.
  • The image determination unit 33 may determine that, among captured images generated at the predetermined frequency, a captured image that differs from the previous captured image is an analysis target. For example, assume that there is a captured image 401 as shown in FIG. 4 and a captured image 402 generated at the next generation timing. Since the captured image 401 and the captured image 402 have the same display content except for the time, the image determination unit 33 determines that the captured image 402 is not an analysis target.
  • In contrast, the captured image 501 and the captured image 502 shown in FIG. 5 cannot be said to be the same image. Specifically, they differ in image shape, aspect ratio, position of the area showing characters, and the like. Therefore, the image determination unit 33 determines that the captured image 502 is an analysis target. This reduces the time the analysis unit 32 spends registering and analyzing captured images that do not need to be converted into data. Furthermore, the possibility of erroneously read data arising from the analysis of unnecessary captured images is reduced.
  • The conditions for determining that the captured image 501 and the captured image 502 are not the same image may be other than those described above. For example, when the two captured images are superimposed, they may be determined not to be the same image if there is a difference in at least part of the area other than the area indicating the time.
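A minimal sketch of this determination (assuming images as 2-D grids of pixel values; the function name and the `(top, left, height, width)` region format are illustrative, not from the patent):

```python
def is_analysis_target(prev, curr, ignore_regions=()):
    """Treat `curr` as an analysis target if it differs from the
    previous captured image anywhere outside the ignored regions
    (such as the area showing the time)."""
    if prev is None:
        return True
    # Different shape or aspect ratio: cannot be the same image.
    if len(prev) != len(curr) or any(len(a) != len(b) for a, b in zip(prev, curr)):
        return True
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            ignored = any(t <= y < t + h and l <= x < l + w
                          for t, l, h, w in ignore_regions)
            if p != c and not ignored:
                return True
    return False
```

A pixel-exact comparison works here because both captures come from the same video signal rather than from a camera, so there is no sensor noise to tolerate.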
  • The presentation unit 34 presents the analysis results of the analysis unit 32.
  • FIG. 6 shows an example of the analysis results of the analysis unit 32 presented by the presentation unit 34.
  • The result of analyzing the captured image is presented as a monitoring screen 601; the time 611 is presented in the upper right corner, and data A to data E are presented as alphanumeric characters.
  • FIG. 7 shows an analysis device 30', which is another form of the analysis device 30.
  • The analysis device 30' adds a comparison unit 35 and a correction unit 36 to the analysis device 30 shown in FIG. 2.
  • The comparison unit 35 compares the results of the multiple analyses when the analysis unit 32 can execute multiple analyses using different processing methods in parallel.
  • The comparison results are then presented by the presentation unit 34.
  • By checking the comparison results, the user can verify the accuracy of the analysis.
  • The analysis unit 32 may also be able to execute multiple analyses using the same processing method in parallel.
  • For example, the analysis unit 32 may perform analysis using a plurality of different software programs that employ OCR processing as the processing method.
  • In this case, the comparison unit 35 may compare the results of the multiple analyses that the analysis unit 32 performed using the plurality of different software programs.
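As an illustrative sketch (the function is an assumption, not the patent's implementation), comparing the outputs of several OCR programs for the same item might report the majority reading and whether all programs agreed:

```python
from collections import Counter

def compare_ocr_results(readings):
    """Given the character data produced by multiple OCR programs for
    the same item, return the most common reading and a flag telling
    whether every program agreed. Full agreement suggests an accurate
    analysis; disagreement flags the item for verification."""
    counts = Counter(readings)
    majority, votes = counts.most_common(1)[0]
    return majority, votes == len(readings)
```

Presenting the disagreement flag alongside the result is one way the presentation unit could let the user verify the accuracy of the analysis.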
  • When the order of the character types in the character data obtained as a result of one OCR process differs from the order of the character types obtained as a result of other OCR processes for the same item, the correction unit 36 corrects the character data resulting from that OCR process.
  • Here, the type of a character refers to whether it is a number, an alphabetic character, and so on.
  • For example, data in which the characters are arranged in the order number, number, alphabet and data in which they are arranged in the order number, alphabet, number differ in the order of character types.
  • When the character data resulting from OCR processing is composed of alphabetic characters and numbers, the character data of the same item is usually arranged with the same sequence of types. Therefore, if the order of character types differs from the other OCR results, the OCR processing is highly likely to be erroneous. The correction unit 36 corrects the character data in such cases.
  • An example of such a correction is shown in FIG. 8.
  • Suppose the result of analyzing data C at a certain timing is "98ST7" as shown in 801, the result at the next timing is "77ST5" as shown in 802, and the result at the timing after that is "668T5" as shown in 803.
  • Data C is expected to be represented in the order number, number, alphabet, alphabet, number, but in 803 it is represented in the order number, number, number, alphabet, number.
  • The correction unit 36 therefore determines that the third character of 803 is likely a misread alphabetic character and, based on the results of 801 and 802, corrects the third character to the alphabetic character "S".
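The correction described above can be sketched as follows (a hypothetical illustration, not the patent's implementation; the type pattern uses 'd' for a number and 'a' for an alphabetic character):

```python
from collections import Counter

def type_pattern(text):
    """Map each character to its type: 'd' for digit, 'a' for alphabet."""
    return "".join("d" if ch.isdigit() else "a" for ch in text)

def correct_by_type_order(reading, references):
    """If `reading` deviates from the character-type order shared by
    the reference readings of the same item, replace each character of
    the wrong type with the majority character of the references at
    that position (as with "668T5" corrected to "66ST5" in FIG. 8)."""
    expected = type_pattern(references[0])
    if any(type_pattern(r) != expected for r in references):
        return reading  # references disagree; no basis for correction
    if len(reading) != len(expected) or type_pattern(reading) == expected:
        return reading
    chars = list(reading)
    for i, expected_type in enumerate(expected):
        if type_pattern(chars[i]) != expected_type:
            votes = Counter(r[i] for r in references)
            chars[i] = votes.most_common(1)[0][0]
    return "".join(chars)
```

With 801 and 802 as references, the pattern is "ddaad"; the third character of "668T5" is a digit where an alphabetic character is expected, so it is replaced by the majority reference character "S".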
  • The correction unit 36 may store the corrected character data in association with the character data before correction in a storage unit (not shown) as a log of the corrected parts.
  • The log may include information such as the captured image corresponding to the corrected character data and the date and time when the correction was performed. By checking the log, the user can gauge the character recognition rate of the OCR processing.
  • As described above, the signal processing system 1 includes the distributor 11 that distributes a video signal to be displayed on the display device 20, the capture unit 12 that generates an analyzable captured image from the distributed video signal, and the output unit 13 that outputs the captured image to the analysis device 30 that analyzes it.
  • Because the video signal is distributed and a captured image is generated from the distributed video signal and output, an image more suitable for analysis than an image taken with a camera can be output to the analysis device 30.
  • Since the analysis device 30 can analyze a captured image generated directly from the video signal, the information included in the image can be acquired accurately. Therefore, information can be acquired easily and accurately.
  • FIG. 9 is a schematic diagram showing an overall outline of the signal processing system 1A.
  • In the signal processing system 1A, a video signal (first video signal) output from the device control device 10 and displayed on the display device 20 is distributed by the distributor 11 (first distributor) and input to the capture unit 12. A video signal (second video signal) output from a device control device 10A different from the device control device 10 and displayed on a display device 20A is distributed by a distributor 11A (second distributor) and likewise input to the capture unit 12. That is, two video signals, the first video signal output from the device control device 10 and the second video signal output from the device control device 10A, are input to the capture unit 12.
  • In other words, the signal processing system 1A includes two distributors (the first distributor 11 and the second distributor 11A), and the capture unit 12 receives the video signals (first video signal and second video signal) from the two device control devices 10 and 10A.
  • The video signal from the device control device 10 and the video signal from the device control device 10A may be switched and input to the capture unit 12 one at a time.
  • The capture unit 12 may generate captured images from both the first video signal distributed by the distributor 11 (first distributor) and the second video signal distributed by the distributor 11A (second distributor).
  • The capture unit 12 may compare the input first video signal and second video signal. As an example, the capture unit 12 may generate two captured images from the first video signal and the second video signal and compare a part common to the two captured images, for example an icon displayed in both. By comparing the two captured images, the capture unit 12 can verify whether one of them is degraded, that is, whether one of the acquired first and second video signals is degraded. If either the first video signal or the second video signal is degraded, the capture unit 12 may re-acquire the degraded video signal. By performing the above processing, the signal processing system 1A can reduce the possibility that a degraded video signal will be used and further improve the accuracy of reading the captured images.
  • The number of video signals input to the capture unit 12 is not limited to two, and may be three or more.
  • Regardless of the format of the input video signals, the capture unit 12 generates the captured images using the same method. If the captured images are all generated using the same method, the analysis device 30 that analyzes them only needs to handle captured images generated in that one way, which simplifies its processing.
  • FIG. 10 is a schematic diagram showing an overall outline of the signal processing system 1B.
  • In the signal processing system 1B, the video signal output from the device control device 10 and displayed on the display device 20 is distributed by the distributor 11 and input to the capture unit 12.
  • The captured image generated by the capture unit 12 is output to the analysis device 30 via the output unit 13 and analyzed by the analysis device 30.
  • Similarly, a video signal output from a device control device 10B and displayed on a display device 20B is distributed by a distributor 11B and input to a capture unit 12B.
  • The captured image generated by the capture unit 12B is output to the analysis device 30 via an output unit 13B and analyzed by the analysis device 30.
  • That is, the signal processing system 1B includes a distributor 11 (11B), a capture unit 12 (12B), and an output unit 13 (13B) for each device control device 10 (10B).
  • This allows one analysis device 30 to analyze the video signals of multiple device control devices.
  • FIG. 11 is a schematic diagram showing an overall outline of a further modified example of the signal processing system 1.
  • In this modification, the captured image output from the signal processing system 1 is transmitted as a video signal to a display device 20C via the analysis device 30 and displayed on the display device 20C.
  • The display device 20C has a larger screen size than the display device 20.
  • The image displayed on the display device 20C is captured by an imaging device 40A and an imaging device 40B; the image data captured by the imaging device 40A is transmitted to an analysis device 30A, and the image data captured by the imaging device 40B is transmitted to an analysis device 30B.
  • The analysis device 30A and the analysis device 30B each perform image analysis (e.g., OCR).
  • Because the display device 20C has a larger screen size than the display device 20, the images captured by the imaging device 40A and the imaging device 40B can be obtained with higher definition than an image of the display screen of the display device 20. Therefore, analyzing the images captured by the imaging device 40A and the imaging device 40B is likely to yield more accurate results than directly analyzing an image of the display device 20.
  • Furthermore, while the display device 20 needs to be installed close to the device control device 10, the display device 20C can be installed at a position away from the device control device 10. Therefore, the display device 20C can be installed with fewer locational restrictions.
  • The analysis device 30A or the analysis device 30B in this modification may include the comparison unit 35.
  • In this case, the comparison unit 35 may compare the image captured by the imaging device 40A or the imaging device 40B with the captured image generated by the capture unit 12.
  • The comparison unit 35 may compare the entire image captured by the imaging device 40A or 40B with the captured image generated by the capture unit 12, or may compare only the portions with poor reading accuracy.
  • As a result, the analysis unit 32 can perform analysis using whichever image is easier to read, and thus can improve the accuracy of image reading.
  • The analysis device 30 may further include a confirmation unit that confirms the data analyzed by the analysis unit 32. The analysis results may then be presented by the presentation unit 34 after being confirmed by the confirmation unit.
  • The confirmation unit can, for example, verify the analysis results by calculation.
  • For example, suppose data A to data E are numerical data.
  • The confirmation unit may perform calculations on data A to data E. Specifically, when monitoring the production status of a given product, suppose data A is the number of non-defective items, data B is the number of defective items with defect 1, data C is the number of defective items with defect 2, data D is the number of defective items with defect 3, and data E is the total.
  • In this case, the numerical value indicated by data E should equal the sum of the numerical values indicated by data A to data D. Therefore, by comparing the sum of the analyzed values of data A to data D with the analyzed value of data E, it is possible to confirm whether the analysis results are correct.
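A minimal sketch of this check (the function name is illustrative, not from the patent):

```python
def totals_consistent(parts, total):
    """Confirm that the analyzed total (data E) equals the sum of the
    analyzed parts (data A to data D); a mismatch indicates that at
    least one value was probably misread."""
    return sum(parts) == total
```

Note that this check can only flag an inconsistency; when it fails, it does not by itself identify which of the readings was wrong.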
  • In the above example, the confirmation unit performs verification using analysis results within a single captured image.
  • The confirmation unit may also verify the analysis results by comparing the analysis results of multiple captured images.
  • Further, the confirmation unit may perform verification by comparing an analysis result with past analysis results. Specifically, when a cumulative total is output as an analysis result, as in the example above, its correctness may be judged by comparing it with past analysis results and confirming that the value is larger than those past results. The past analysis results used for comparison may be verified data. If the past analysis results have not been verified, correctness may instead be judged by whether values larger than the past results have been output continuously.
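As a hedged sketch of the comparison with past results (a simplification assuming the analyzed value is a cumulative total that should never decrease; names and the handling of the unverified case are illustrative assumptions):

```python
def plausible_against_history(new_total, past_totals, verified=True):
    """Judge a newly read cumulative total against past analysis
    results: if the past results are verified, the new value must not
    be smaller than any of them; if they are unverified, require the
    new value to exceed the most recent reading."""
    if not past_totals:
        return True  # nothing to compare against yet
    if verified:
        return all(new_total >= t for t in past_totals)
    return new_total > past_totals[-1]
```

This kind of monotonicity check complements the within-image sum check: it catches misreads that happen to be internally consistent but implausible over time.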
  • The analysis device 30 may store the analysis results in the storage unit. This makes it possible to later verify malfunctions of the analysis unit.
  • A signal processing system 1, 1A, 1B according to aspect 1 of the present disclosure includes distributors 11, 11A, 11B that distribute video signals to be displayed on display devices 20, 20A, 20B, 20C, capture units 12, 12A, 12B that generate analyzable captured images from the distributed video signals, and output units 13, 13B that output the captured images to analysis devices 30, 30', 30A, 30B that analyze the captured images.
  • A signal processing system 1, 1A, 1B according to aspect 2 of the present disclosure may, in aspect 1, include the analysis device 30, 30', 30A, 30B, where the analysis device is capable of executing a plurality of analyses and includes a comparison unit 35 that compares the results of the plurality of analyses.
  • In a signal processing system according to a further aspect, the analysis devices 30, 30', 30A, 30B may be capable of executing a plurality of analyses using different processing methods.
  • In a signal processing system according to a further aspect, the capture units 12, 12A, 12B capture the video signal at a predetermined frequency.
  • The predetermined frequency may be lower than the refresh rate of the display devices 20, 20A, 20B, 20C.
  • In a signal processing system according to a further aspect, the analysis devices 30, 30', 30A, 30B may analyze, among the generated captured images, only the captured images that satisfy a predetermined condition.
  • In a signal processing system according to a further aspect, the analysis devices 30, 30', 30A, 30B may analyze, among the generated captured images, only the captured images that differ from the previous captured image.
  • In a signal processing system according to a further aspect, the analysis devices 30, 30', 30A, 30B may analyze only a predetermined region of the captured image.
  • In a signal processing system according to a further aspect, the analysis devices 30, 30', 30A, 30B may generate character data from the captured image by performing OCR (Optical Character Recognition/Reader) processing.
  • The analysis devices 30, 30', 30A, 30B may include a correction unit 36 that corrects the character data when the order of the character types obtained as a result of OCR processing differs from the order obtained as a result of other OCR processing for the same item.
  • A signal processing system 1, 1A, 1B according to a further aspect may, in any one of aspects 1 to 9 above, include a first distributor (distributor 11) that distributes a first video signal and a second distributor (distributor 11A) that distributes a second video signal, and may generate captured images from both the video signal distributed by the first distributor and the video signal distributed by the second distributor.
  • In a signal processing system according to a further aspect, the capture units 12, 12A, 12B may generate the captured images using the same method regardless of the format of the plurality of video signals.
  • A signal processing method according to a further aspect of the present disclosure is a signal processing method for generating an image to be analyzed, using distributors 11, 11A, 11B that distribute video signals to be displayed on display devices 20, 20A, 20B, 20C. The method includes a distribution step of distributing the video signal with the distributors 11, 11A, 11B, a capture step of generating an analyzable captured image from the distributed video signal, and an output step of outputting the captured image to an analysis device that analyzes the captured image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Character Discrimination (AREA)

Abstract

The present invention acquires information easily and accurately. This signal processing system comprises a distributor for distributing a video signal to be displayed by a display device, a capture unit for generating an analyzable captured image from the distributed video signal, and an output unit for outputting the captured image to an analysis device that analyzes the captured image.

Description

信号処理システムおよび信号処理方法Signal processing system and signal processing method
 本開示は、解析対象となるキャプチャ画像を生成する信号処理システム等に関する。 The present disclosure relates to a signal processing system and the like that generate captured images to be analyzed.
 撮影画像から情報を読み取り、データ化する技術が知られている。特許文献1には、カメラで撮影した画像に対し、OCR処理を行い、文字データ化する技術が開示されている。 Technology for reading information from captured images and converting them into data is known. Patent Document 1 discloses a technique for performing OCR processing on an image taken with a camera and converting it into character data.
特開2014-153734号公報Japanese Patent Application Publication No. 2014-153734
 本開示の一態様に係る信号処理システムは、表示装置に表示させる映像信号を分配する分配器と、分配された前記映像信号から解析可能なキャプチャ画像を生成するキャプチャ部と、前記キャプチャ画像を、該キャプチャ画像を解析する解析装置へ出力する出力部と、を備える。 A signal processing system according to an aspect of the present disclosure includes: a distributor that distributes a video signal to be displayed on a display device; a capture unit that generates a capture image that can be analyzed from the distributed video signal; and an output unit that outputs the captured image to an analysis device that analyzes the captured image.
 本開示の一態様に係る信号処理方法は、解析対象となる画像を生成するための信号処理方法であって、表示装置に表示させる映像信号を分配する分配器を備え、前記分配器で前記映像信号を分配する分配ステップと、分配された前記映像信号から解析可能なキャプチャ画像を生成するキャプチャステップと、前記キャプチャ画像を、該キャプチャ画像を解析する解析装置へ出力する出力ステップと、を含む。 A signal processing method according to an aspect of the present disclosure is a signal processing method for generating an image to be analyzed, and includes a distributor that distributes a video signal to be displayed on a display device, and the distributor The method includes a distribution step of distributing a signal, a capture step of generating an analyzable captured image from the distributed video signal, and an output step of outputting the captured image to an analysis device that analyzes the captured image.
FIG. 1 is a schematic diagram showing an overall outline of a signal processing system according to an embodiment of the present disclosure.
FIG. 2 is a functional block diagram showing the configuration of the main parts of an analysis device.
FIG. 3 is a diagram illustrating an example in which only a part of a captured image is analyzed.
FIG. 4 is a diagram illustrating an example of determining whether or not to analyze a captured image.
FIG. 5 is a diagram illustrating another example of determining whether or not to analyze a captured image.
FIG. 6 is a diagram illustrating an example of presentation of the analysis results of an analysis unit.
FIG. 7 is a functional block diagram showing the configuration of the main parts of another analysis device.
FIG. 8 is a diagram illustrating an example of correcting analysis results.
FIG. 9 is a schematic diagram showing an overall outline of another signal processing system.
FIG. 10 is a schematic diagram showing an overall outline of yet another signal processing system.
FIG. 11 is a schematic diagram showing an overall outline of a modification of the signal processing system.
 In a configuration in which photographing is performed with a camera, as described in Patent Document 1, effort is required: a place to install the camera is needed, and photographing failures due to dirt on the lens and the like must be dealt with. In addition, poor reading accuracy lowers the accuracy of the OCR processing, so the camera must be focused precisely, which complicates the processing. Furthermore, to improve reading accuracy it is desirable to obtain high-definition images, which requires high-performance photographic equipment.
 Therefore, the technique described in Patent Document 1 leaves room for improvement in terms of acquiring information easily and accurately.
 A signal processing system and signal processing method according to one aspect of the present disclosure distribute a video signal and analyze a captured image generated by capturing the distributed video signal, so the problems that accompany photographing an image with a camera do not arise. Moreover, because the captured image is generated directly from the video signal, the information contained in the image can be acquired accurately. The signal processing system and signal processing method according to one aspect of the present disclosure therefore have the effect that analysis can be performed easily and accurately, and accurate information can be acquired, compared with the case of analyzing an image photographed with a camera.
 First, a signal processing system 1 according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a schematic diagram for explaining the signal processing system 1. As shown in FIG. 1, the signal processing system 1 includes a distributor 11, a capture unit 12, and an output unit 13. In the following, the system including the distributor 11, the capture unit 12, and the output unit 13 is described as the signal processing system 1, but the signal processing system 1 may also include an analysis device 30.
 The distributor 11 distributes the video signal transmitted from a device control device 10 to a display device 20 (distribution step). The distributed video signal is transmitted to the display device 20 and to the capture unit 12.
 Here, the device control device 10 is, for example, a control device that controls production equipment. The device control device 10, for example, generates a screen that presents data acquired from the controlled object to a user, and transmits to the display device 20 a video signal for displaying that screen on the display device 20. The screen is thereby displayed on the display device 20.
 The display device 20 is a so-called display, capable of displaying various kinds of information. The display device 20 may be attached integrally to the device control device 10, or may be connected to the device control device 10 by wire.
 The capture unit 12 generates a captured image from the video signal distributed by the distributor 11 (capture step). The generated captured image is sent to the output unit 13. The capture unit 12 captures the input video signal at a predetermined frequency to generate captured images. The predetermined frequency may be lower than the refresh rate of the display device 20; in other words, the capture unit 12 may capture less often than the display device 20 refreshes. This is because the image on the display device 20 cannot change more often than the refresh rate, so there is no point in capturing more often than the refresh rate. Keeping the predetermined frequency below the refresh rate of the display device 20 therefore eliminates wasted captures. The predetermined frequency may be specified as a frequency in hertz.
 The interval at which the capture unit 12 performs captures may be adjustable. By adjusting the capture interval, the capture unit 12 can generate an appropriate number of captured images and send them to the output unit 13. This reduces the possibility that the capture unit 12 and the analysis device 30 become overloaded in the signal processing system 1.
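As a minimal sketch of the rate logic described above, the requested capture frequency can be clamped so it never exceeds the display's refresh rate; the helper name and interface are illustrative assumptions, not part of the patent.

```python
# Hypothetical helper: clamp the capture frequency to the display refresh
# rate, since capturing faster than the display refreshes yields only
# duplicate frames.

def capture_interval_seconds(requested_hz: float, refresh_rate_hz: float) -> float:
    """Return the interval between captures, in seconds."""
    if requested_hz <= 0:
        raise ValueError("capture frequency must be positive")
    effective_hz = min(requested_hz, refresh_rate_hz)  # clamp to refresh rate
    return 1.0 / effective_hz

# A 120 Hz request against a 60 Hz display is clamped to 60 Hz.
print(capture_interval_seconds(120.0, 60.0))  # 1/60 s
print(capture_interval_seconds(2.0, 60.0))    # capture every 0.5 s
```

Lengthening the interval returned here is one way the adjustable capture interval could keep the capture unit and the analysis device from being overloaded.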
 The captured images generated by the capture unit 12 may also be recorded in a predetermined storage unit (not shown).
 The output unit 13 outputs the captured image generated by the capture unit 12 to the analysis device 30 (output step).
 The analysis device 30 analyzes the captured image generated by the capture unit 12. Details of the analysis device 30 will be described with reference to FIG. 2. FIG. 2 is a functional block diagram showing the configuration of the main parts of the analysis device 30.
 As shown in FIG. 2, the analysis device 30 includes an acquisition unit 31, an analysis unit 32, and a presentation unit 34. The analysis unit 32 may also include an image determination unit 33.
 The acquisition unit 31 acquires the captured image from the output unit 13 and passes it to the analysis unit 32.
 The analysis unit 32 analyzes the captured image acquired via the acquisition unit 31. One example of the analysis performed by the analysis unit 32 is OCR (Optical Character Recognition/Reader) processing. OCR processing makes it possible to obtain, from the captured image, character data representing the characters contained in that image. The analysis performed by the analysis unit 32 is not limited to OCR; it may be analysis for acquiring, as data, a pattern contained in the captured image.
 As one example, the analysis unit 32 may perform analysis using a pattern such as an icon contained in the captured image. Specifically, when the captured image contains a pattern such as an icon indicating the state of the equipment being monitored (the device control device 10), the analysis unit 32 may acquire that pattern as the data to be analyzed. In this case, the analysis unit 32 may convert the state of the monitored equipment into data by performing pattern matching using the acquired pattern data.
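The icon-based pattern matching can be sketched as follows, under the simplifying assumption that icons are small exact-match pixel grids; the icon shapes, positions, and state names here are illustrative, not taken from the patent, and a real implementation would use tolerant template matching.

```python
# Hypothetical icon patterns: 2x2 binary grids mapped to equipment states.
RUNNING_ICON = ((1, 1), (1, 1))
STOPPED_ICON = ((1, 0), (0, 1))
KNOWN_ICONS = {RUNNING_ICON: "running", STOPPED_ICON: "stopped"}

def region(image, top, left, height, width):
    """Extract a sub-grid of the captured image as a tuple of tuples."""
    return tuple(tuple(row[left:left + width]) for row in image[top:top + height])

def equipment_state(image, icon_top, icon_left):
    """Match the icon region against the known icons and return the state."""
    patch = region(image, icon_top, icon_left, 2, 2)
    return KNOWN_ICONS.get(patch, "unknown")

screen = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
print(equipment_state(screen, 0, 1))  # the 2x2 patch at (0, 1) matches "running"
```

Because the captured image comes straight from the video signal rather than from a camera, the icon pixels are reproduced exactly, which is what makes even this exact-match comparison plausible.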
 The analysis unit 32 may also be capable of performing, in parallel, multiple analyses that use different processing methods, or multiple analyses that use the same processing method but different processing algorithms. An example of analyses with different processing algorithms is analyses with different parameter values.
 The analysis unit 32 may also analyze only a part of the captured image. An example of analyzing only a part of one captured image will be described with reference to FIG. 3. Reference numeral 301 in FIG. 3 denotes a captured image. When analyzing only a part of the captured image, the analysis unit 32 analyzes, for example, only the region 311 of the captured image 301, that is, only data A and data B. The region to be analyzed may be determined in advance. Analyzing only a part of the captured image reduces the analysis load compared with analyzing the entire captured image.
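Restricting the analysis to a predefined region, as in FIG. 3, amounts to cropping the captured image before it reaches the analyzer. The sketch below uses an illustrative row-of-strings image representation and made-up coordinates for region 311.

```python
def crop(image, top, left, bottom, right):
    """Return the sub-image covering rows [top, bottom) and columns [left, right)."""
    return [row[left:right] for row in image[top:bottom]]

# Hypothetical captured image: only the marked region holds data A and data B.
captured = [
    "..........",
    ".dataA....",
    ".dataB....",
    "..........",
]
roi = crop(captured, 1, 1, 3, 6)  # region corresponding to 311 in FIG. 3
print(roi)  # ['dataA', 'dataB'] -- only this part is passed to the analyzer
```

Only `roi` would then be handed to OCR, which is where the load reduction over whole-image analysis comes from.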
 The image determination unit 33 determines whether a captured image acquired via the acquisition unit 31 is a captured image to be analyzed. When the image determination unit 33 is provided, the analysis unit 32 may analyze only the captured images that the image determination unit 33 has determined to be analysis targets.
 The image determination unit 33 may determine that a captured image satisfying a predetermined condition is an analysis target. Analyzing only the captured images that satisfy a predetermined condition reduces the analysis load compared with analyzing all captured images, and also reduces the reading of erroneous data from unintended screens. Here, the predetermined condition may be, for example, that a predetermined mark is present at a predetermined position, that the overall layout of the captured image matches a predetermined layout, or that the captured image was generated at a predetermined time.
 The image determination unit 33 may also determine that, among the captured images generated at the predetermined frequency, a captured image that differs from the immediately preceding captured image is an analysis target. For example, suppose there are a captured image 401 as shown in FIG. 4 and a captured image 402 generated at the next generation timing. Since the captured images 401 and 402 display the same content except for the time, the image determination unit 33 determines that the captured image 402 is not an analysis target.
 On the other hand, suppose there are a captured image 501 as shown in FIG. 5 and a captured image 502 generated at the next generation timing. The captured images 501 and 502 cannot be said to be the same image: specifically, they differ in image shape, aspect ratio, position of the regions showing characters, and the like. The image determination unit 33 therefore determines that the captured image 502 is an analysis target. This reduces the time spent registering and analyzing captured images that do not need to be converted into data by the analysis unit 32, and reduces the possibility that misread data arises from analyzing unnecessary captured images. The condition for judging that the captured images 501 and 502 are not the same image may be a condition other than those described above. For example, when the two captured images are superimposed, they may be judged not to be the same image if there is a difference in at least a part of any region other than the region indicating the time.
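The duplicate-frame check above, comparing two consecutive captures while ignoring the clock region, can be sketched as follows. The image representation (rows of characters) and the mask position are illustrative assumptions.

```python
def differs_outside_mask(prev, curr, mask):
    """True if any position outside the masked (row, col) set differs."""
    if len(prev) != len(curr) or any(len(a) != len(b) for a, b in zip(prev, curr)):
        return True  # shape or aspect ratio changed -> treat as a different screen
    return any(
        prev[r][c] != curr[r][c]
        for r in range(len(prev))
        for c in range(len(prev[r]))
        if (r, c) not in mask
    )

clock_mask = {(0, 3)}  # hypothetical position of the time display
frame1 = ["ab:1", "cd00"]
frame2 = ["ab:2", "cd00"]  # only the clock digit changed
frame3 = ["ab:3", "cd07"]  # a data value changed as well

print(differs_outside_mask(frame1, frame2, clock_mask))  # False: skip analysis
print(differs_outside_mask(frame2, frame3, clock_mask))  # True: analyze
```

A frame for which this check returns `False` corresponds to captured image 402 in FIG. 4 and is dropped; a `True` result corresponds to captured image 502 in FIG. 5 and is queued for analysis.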
 The presentation unit 34 presents the analysis results of the analysis unit 32. FIG. 6 shows an example of the analysis results of the analysis unit 32 presented by the presentation unit 34. In the example shown in FIG. 6, the result of analyzing the captured image is presented as a monitoring screen 601: a time 611 is presented at the upper right, and the alphanumeric values of data A to data E are presented in frames 612 to 616 shown roughly at the center.
 [Another form of the analysis device 30]
 FIG. 7 shows an analysis device 30', which is another form of the analysis device 30. In the analysis device 30' shown in FIG. 7, a comparison unit 35 and a correction unit 36 are added to the analysis device 30 shown in FIG. 2.
 When the analysis unit 32 can execute multiple analyses with different processing methods in parallel, the comparison unit 35 compares the results of those analyses, and the comparison result is presented by the presentation unit 34. By checking the comparison result, the user can verify the accuracy of the analysis. The analysis unit 32 may also be capable of executing multiple analyses with the same processing method in parallel; as one example, it may perform the analysis using several different software packages that each employ OCR processing as the processing method. In this case, the comparison unit 35 may compare the results of the analyses that the analysis unit 32 executed with the different software packages.
 Since OCR processing generates character data by inferring characters from their shapes, characters with similar shapes may be misjudged when the display is unclear; for example, the numeral "8" and the letter "S" may be confused. Therefore, when the captured image displays data for multiple items, the correction unit 36 corrects the character data resulting from OCR processing when the sequence of character types in that data differs from the sequence of character types obtained by other OCR processing for the same item. Here, the character types are numerals, letters, and so on: character data arranged in the order numeral, numeral, letter and data arranged in the order numeral, letter, numeral have different sequences of character types. When character data resulting from OCR processing consists of letters and numerals, character data for the same item usually has the same sequence of types. Accordingly, when the sequence of character types differs from the other OCR results, the OCR processing is likely to be in error, and the correction unit 36 corrects the character data in such cases.
 FIG. 8 shows an example of a correction. Suppose that for data C the result of analysis by the analysis unit 32 at one timing is "98ST7" as shown at 801, the result at the next timing is "77ST5" as shown at 802, and the result at the timing after that is "668T5" as shown at 803. As shown at 801 and 802, data C is expected to follow the pattern "numeral, numeral, letter, letter, numeral", whereas at 803 it follows "numeral, numeral, numeral, letter, numeral". In this case the correction unit 36 judges that the third numeral at 803 is likely a misread letter and, based on the results at 801 and 802, corrects the third character to the letter "S". When the correction unit 36 corrects character data, it may associate the character data that was corrected with the corrected character data and store them in a storage unit (not shown) as a log of the corrections. The log may include information such as the captured image corresponding to the corrected character data and the date and time of the correction. By checking the log, the user can recognize the character recognition rate of the OCR processing.
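The correction illustrated in FIG. 8 can be sketched as follows: readings of the same item should share one digit/letter pattern, so a reading whose pattern deviates from the majority is corrected using visually similar characters. The confusion tables and helper names are assumptions for illustration, not taken from the patent.

```python
from collections import Counter

# Hypothetical confusion tables of visually similar characters.
DIGIT_TO_LETTER = {"8": "S", "5": "S", "0": "O", "1": "I"}
LETTER_TO_DIGIT = {"S": "5", "B": "8", "O": "0", "I": "1"}

def type_pattern(text: str) -> str:
    """'98ST7' -> 'DDAAD' (D = digit, A = alphabetic)."""
    return "".join("D" if ch.isdigit() else "A" for ch in text)

def correct(readings):
    """Correct readings whose type pattern disagrees with the majority pattern."""
    majority, _ = Counter(type_pattern(r) for r in readings).most_common(1)[0]
    fixed = []
    for r in readings:
        if type_pattern(r) == majority or len(r) != len(majority):
            fixed.append(r)
            continue
        chars = list(r)
        for i, (ch, want) in enumerate(zip(chars, majority)):
            if want == "A" and ch.isdigit():
                chars[i] = DIGIT_TO_LETTER.get(ch, ch)
            elif want == "D" and ch.isalpha():
                chars[i] = LETTER_TO_DIGIT.get(ch, ch)
        fixed.append("".join(chars))
    return fixed

# '668T5' has pattern DDDAD against the majority DDAAD -> the '8' becomes 'S'.
print(correct(["98ST7", "77ST5", "668T5"]))  # ['98ST7', '77ST5', '66ST5']
```

This reproduces the FIG. 8 example: the third character of "668T5" is judged to be a misread letter and corrected to "S".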
 As described above, the signal processing system 1 according to the present embodiment includes the distributor 11 that distributes a video signal to be displayed on the display device 20, the capture unit 12 that generates an analyzable captured image from the distributed video signal, and the output unit 13 that outputs the captured image to the analysis device 30 that analyzes the captured image.
 Because the video signal is distributed and a captured image is generated from the distributed video signal and output, an image better suited to analysis than an image photographed with a camera can be output to the analysis device 30. Since the analysis device 30 can analyze a captured image generated from the video signal, the information contained in the image can be acquired accurately. Information can therefore be acquired easily and accurately.
 [Modification 1]
 A signal processing system 1A, which is a modification of the signal processing system 1, will be described with reference to FIG. 9. FIG. 9 is a schematic diagram showing an overall outline of the signal processing system 1A.
 As shown in FIG. 9, in the signal processing system 1A, a video signal (first video signal) output from the device control device 10 and displayed on the display device 20 is distributed by the distributor 11 (first distributor) and input to the capture unit 12, and a video signal (second video signal) output from a device control device 10A, different from the device control device 10, and displayed on a display device 20A is distributed by a distributor 11A (second distributor) and input to the capture unit 12. That is, two video signals are input to the capture unit 12: the first video signal output from the device control device 10 and the second video signal output from the device control device 10A. In other words, the signal processing system 1A includes two distributors (first distributor and second distributor), and the capture unit 12 receives the video signals (first video signal and second video signal) from the two device control devices 10 and 10A. The video signal from the device control device 10 and the video signal from the device control device 10A may also be switched and input to the capture unit 12 one at a time. The capture unit 12 may generate captured images from both the first video signal distributed by the distributor 11 (first distributor) and the second video signal distributed by the distributor 11A (second distributor).
 Thus, even when there are multiple device control devices 10 and 10A, each transmitting a video signal to a display device 20 or 20A, a single signal processing system 1A can distribute the video signals and have them analyzed.
 The capture unit 12 may also compare the input first and second video signals. As one example, the capture unit 12 generates two captured images using the first and second video signals and compares parts common to the two captured images, for example an icon displayed in both. By comparing the two captured images, the capture unit 12 can verify whether either captured image is degraded, that is, whether either of the acquired first and second video signals is degraded. If either the first or the second video signal is degraded, the capture unit 12 may re-acquire the degraded video signal. By performing such processing, the signal processing system 1A can reduce the possibility that a degraded video signal is used, and can further improve the reading accuracy of captured images.
 The number of video signals input to the capture unit 12 is not limited to two; it may be three or more.
 Moreover, even when the format of the video signal transmitted from the device control device 10 differs from that of the video signal transmitted from the device control device 10A, the capture unit 12 may generate the captured images by the same method. If the captured images are generated by the same method, the analysis device 30 that analyzes them only has to handle captured images generated by one method, which simplifies the processing.
 [Modification 2]
 A signal processing system 1B, which is a further modification of the signal processing system 1, will be described with reference to FIG. 10. FIG. 10 is a schematic diagram showing an overall outline of the signal processing system 1B.
 As shown in FIG. 10, in the signal processing system 1B, the video signal output from the device control device 10 and displayed on the display device 20 is distributed by the distributor 11 and input to the capture unit 12. The captured image generated by the capture unit 12 is output to the analysis device 30 via the output unit 13 and analyzed by the analysis device 30.
 Similarly, a video signal output from a device control device 10B and displayed on a display device 20B is distributed by a distributor 11B and input to a capture unit 12B. The captured image generated by the capture unit 12B is output to the analysis device 30 via an output unit 13B and analyzed by the analysis device 30.
 In this way, the signal processing system 1B includes a distributor 11 (11B), a capture unit 12 (12B), and an output unit 13 (13B) for each device control device 10 (10B).
 Thus, when there are multiple device control devices 10 and 10B, each transmitting a video signal to a display device 20 or 20B, a single analysis device 30 can perform the analysis.
 [Modification 3]
 A further modification of the signal processing system 1 will be described with reference to FIG. 11. FIG. 11 is a schematic diagram showing an overall outline of this further modification of the signal processing system 1.
 As shown in FIG. 11, in this modification, the captured image output from the signal processing system 1 is transmitted as a video signal to a display device 20C via the analysis device 30 and displayed on the display device 20C. The display device 20C has a larger screen size than the display device 20.
 The image displayed on the display device 20C is photographed by imaging devices 40A and 40B; the image data of the image photographed by the imaging device 40A is transmitted to an analysis device 30A, and the image data of the image photographed by the imaging device 40B is transmitted to an analysis device 30B. The analysis devices 30A and 30B each analyze the images (for example, by OCR).
 Since the display device 20C has a larger screen size than the display device 20, the images photographed by the imaging devices 40A and 40B are captured in higher definition than an image photographed of the display screen of the display device 20. The results of analyzing the images photographed by the imaging devices 40A and 40B are therefore likely to be more accurate than those obtained by analyzing an image photographed directly of the display device 20.
 In this modification, displaying the captured image generated by the capture unit 12 on the display device 20C, whose screen size is larger than that of the display device 20, thus makes the analysis more accurate. In addition, while the display device 20 must be installed close to the device control device 10, the display device 20C, unlike the display device 20, can be installed at a position away from the device control device 10. The display device 20C can therefore be installed without locational restrictions.
 The analysis device 30A or 30B in this modification may also include a comparison unit 35. The comparison unit 35 may compare an image photographed by the imaging device 40A or 40B with the captured image generated by the capture unit 12; for example, it may compare the images in their entirety, or compare only the portions with poor reading accuracy. With this comparison, the analysis unit 32 can perform the analysis using whichever image is easier to read, improving the image reading accuracy.
 The present disclosure is not limited to the embodiments described above; various modifications are possible within the scope of the claims, and embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included within the technical scope of the present disclosure. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
 For example, although the above description presents an example in which the data analyzed by the analysis unit 32 is presented by the presentation unit 34, the analysis device 30 may further include a confirmation unit that checks the data analyzed by the analysis unit 32, and the analysis result may be presented by the presentation unit 34 after being checked by the confirmation unit.
 The confirmation unit can, for example, verify the analysis result; it can also, for example, recompute it as a cross-check. When data A to data E are numerical data, the confirmation unit may cross-check data A to data E. Specifically, suppose that, in monitoring the production status of a given product, data A indicates the number of non-defective items, data B the number of defects of defect item 1, data C the number of defects of defect item 2, data D the number of defects of defect item 3, and data E the total number of inspections. The value indicated by data E should then equal the sum of the values indicated by data A to data D. By comparing the sum of the numerical analysis results for data A to data D with the numerical analysis result for data E, it can therefore be confirmed whether the analysis result is correct.
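For illustration only (this sketch is not part of the disclosure; the function name and the dictionary layout are assumptions), the sum check described above — data E should equal the sum of data A through data D — could be written as:

```python
def verify_totals(counts):
    """Cross-check one analysis result: the total inspection count (data E)
    should equal the good-item count (data A) plus the per-defect-item
    counts (data B to data D)."""
    expected_total = counts["A"] + counts["B"] + counts["C"] + counts["D"]
    return counts["E"] == expected_total

# Example: 90 good items, defect counts 3 + 2 + 5, total inspected 100
print(verify_totals({"A": 90, "B": 3, "C": 2, "D": 5, "E": 100}))  # True
print(verify_totals({"A": 90, "B": 3, "C": 2, "D": 5, "E": 99}))   # False
```

A failed check would indicate that at least one OCR reading in the capture image is likely wrong, without identifying which one.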
 The above example describes the confirmation unit performing verification using the analysis result within a single capture image, that is, verifying the analysis result by comparing pieces of data within one analysis result. The confirmation unit may instead verify the analysis result by comparing the analysis results of a plurality of capture images. The confirmation unit may also perform the verification by comparing an analysis result with past analysis results. Specifically, when the total number of inspections is output as an analysis result as in the above example, whether the analysis result is correct may be judged by comparing it with a past analysis result and confirming that it is a larger value. The past analysis result used for the comparison may be verified data. When the past analysis results used for the comparison have not been verified, correctness may be judged by whether values larger than the past analysis results have been output consecutively.
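As a minimal sketch of the history-based check above (outside the disclosure; the function name, the single-threshold rule for verified history, and the two-entry rule for unverified history are all assumptions for illustration), a monotonically increasing total such as an inspection count could be checked against past results like this:

```python
def check_against_history(new_total, history, history_verified):
    """Judge a newly analyzed inspection total against past totals.
    A cumulative total should never shrink, so a smaller value suggests
    a misread.  With verified history one comparison suffices; with
    unverified history, require consistency with the last two entries
    (a hypothetical reading of the 'consecutive' rule)."""
    if not history:
        return True  # nothing to compare against yet
    if history_verified:
        return new_total >= history[-1]
    # Unverified history: the new value must not be below either of
    # the two most recent (possibly themselves erroneous) results.
    return all(new_total >= past for past in history[-2:])

print(check_against_history(100, [90, 95], True))   # True
print(check_against_history(80, [90, 95], True))    # False
print(check_against_history(100, [90, 95], False))  # True
```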
 When the verification performed by the confirmation unit determines that an analysis result is incorrect, the analysis device 30 may store that analysis result in the storage unit. This makes it possible to investigate a malfunction of the analysis unit afterwards.
 〔Summary〕
 A signal processing system 1, 1A, 1B according to aspect 1 of the present disclosure includes: a distributor 11, 11A, 11B that distributes a video signal to be displayed on a display device 20, 20A, 20B, 20C; a capture unit 12, 12A, 12B that generates an analyzable capture image from the distributed video signal; and an output unit 13, 13B that outputs the capture image to an analysis device 30, 30', 30A, 30B that analyzes the capture image.
 A signal processing system 1, 1A, 1B according to aspect 2 of the present disclosure, in aspect 1, may include the analysis device 30, 30', 30A, 30B, and the analysis device 30, 30', 30A, 30B may be capable of executing a plurality of analyses and may include a comparison unit 35 that compares the results of the plurality of analyses.
 In a signal processing system 1, 1A, 1B according to aspect 3 of the present disclosure, in aspect 2, the analysis device 30, 30', 30A, 30B may be capable of executing a plurality of analyses with different processing methods.
 In a signal processing system 1, 1A, 1B according to aspect 4 of the present disclosure, in any one of aspects 1 to 3, the capture unit 12, 12A, 12B captures the video signal at a predetermined frequency, and the predetermined frequency may be lower than the refresh rate of the display device 20, 20A, 20B, 20C.
 In a signal processing system 1, 1A, 1B according to aspect 5 of the present disclosure, in aspect 4, the analysis device 30, 30', 30A, 30B may analyze only the capture images, among the generated capture images, that satisfy a predetermined condition.
 In a signal processing system 1, 1A, 1B according to aspect 6 of the present disclosure, in aspect 5, the analysis device 30, 30', 30A, 30B may analyze only the capture images, among the generated capture images, that differ from the immediately preceding capture image.
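Purely as an illustrative sketch (not part of the claimed subject matter; the generator name and the use of a content hash as the equality test are assumptions), aspect 6 — skipping capture images identical to the immediately preceding one — could look like:

```python
import hashlib

def changed_frames(frames):
    """Yield only the capture images that differ from the immediately
    preceding one.  A content hash serves as a cheap equality test so
    that large frames need not be compared byte-by-byte twice."""
    prev_digest = None
    for frame in frames:
        digest = hashlib.sha256(frame).hexdigest()
        if digest != prev_digest:
            yield frame  # screen content changed: worth analyzing
        prev_digest = digest

frames = [b"screen-1", b"screen-1", b"screen-2", b"screen-2", b"screen-3"]
print(list(changed_frames(frames)))  # [b'screen-1', b'screen-2', b'screen-3']
```

Filtering this way keeps the analysis load proportional to how often the monitored screen actually changes rather than to the capture frequency.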
 In a signal processing system 1, 1A, 1B according to aspect 7 of the present disclosure, in any one of aspects 1 to 6, the analysis device 30, 30', 30A, 30B may analyze only a predetermined region of the capture image.
 In a signal processing system 1, 1A, 1B according to aspect 8 of the present disclosure, in any one of aspects 1 to 7, the analysis device 30, 30', 30A, 30B may generate character data from the capture image by OCR (Optical Character Recognition/Reader) processing.
 In a signal processing system 1, 1A, 1B according to aspect 9 of the present disclosure, in aspect 8, when the capture image displays data of a plurality of items, the analysis device 30, 30', 30A, 30B may include a correction unit 36 that corrects the character data when the sequence of types of the character data obtained as a result of OCR processing differs from the sequence obtained as a result of other OCR processing for the same item.
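For illustration only (outside the disclosure; the confusion table, the pattern notation, and all function names are hypothetical), the type-sequence correction of aspect 9 could be sketched as follows: a field whose other OCR readings are all digits ("ddd") but which comes back containing a letter is repaired using common letter/digit confusions.

```python
# Common OCR confusions between letters and digits (assumed mapping).
LETTER_TO_DIGIT = {"O": "0", "I": "1", "l": "1", "S": "5", "B": "8"}

def type_pattern(text):
    """Reduce a string to its type sequence: 'd' for digit, 'a' otherwise."""
    return "".join("d" if ch.isdigit() else "a" for ch in text)

def correct_field(text, expected_pattern):
    """If this OCR result's type sequence deviates from the sequence seen
    in other OCR results for the same item, substitute commonly
    confused letters at the positions expected to be digits."""
    if type_pattern(text) == expected_pattern:
        return text  # already matches: no correction needed
    return "".join(
        LETTER_TO_DIGIT.get(ch, ch) if exp == "d" and not ch.isdigit() else ch
        for ch, exp in zip(text, expected_pattern)
    )

print(correct_field("1O5", "ddd"))  # "105"
```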
 A signal processing system 1, 1A, 1B according to aspect 10 of the present disclosure, in any one of aspects 1 to 9, may include a first distributor (distributor 11) that distributes a first video signal and a second distributor (distributor 11A) that distributes a second video signal, and the capture unit 12, 12A, 12B may generate the capture image from both the video signal distributed by the first distributor and the video signal distributed by the second distributor.
 In a signal processing system 1, 1A, 1B according to aspect 11 of the present disclosure, in any one of aspects 1 to 10, the capture unit 12, 12A, 12B may create the capture image by the same method regardless of the formats of the plurality of video signals.
 A signal processing method according to aspect 12 of the present disclosure is a signal processing method for generating an image to be analyzed, using a distributor 11, 11A, 11B that distributes a video signal to be displayed on a display device 20, 20A, 20B, 20C, the method including: a distribution step of distributing the video signal with the distributor 11, 11A, 11B; a capture step of generating an analyzable capture image from the distributed video signal; and an output step of outputting the capture image to an analysis device that analyzes the capture image.
 1, 1A, 1B Signal processing system
 10 Device control device
 11, 11A, 11B Distributor
 20, 20A, 20B, 20C Display device
 12, 12A, 12B Capture unit
 13, 13B Output unit
 30, 30', 30A, 30B Analysis device
 31 Acquisition unit
 32 Analysis unit
 33 Image determination unit
 34 Presentation unit
 35 Comparison unit
 36 Correction unit
 40, 40A, 40B Imaging device

Claims (12)

  1.  A signal processing system comprising:
     a distributor that distributes a video signal to be displayed on a display device;
     a capture unit that generates an analyzable capture image from the distributed video signal; and
     an output unit that outputs the capture image to an analysis device that analyzes the capture image.
  2.  The signal processing system according to claim 1, comprising the analysis device,
     wherein the analysis device is capable of executing a plurality of analyses and includes a comparison unit that compares the results of the plurality of analyses.
  3.  The signal processing system according to claim 2, wherein the analysis device is capable of executing a plurality of analyses with different processing methods.
  4.  The signal processing system according to any one of claims 1 to 3, wherein the capture unit captures the video signal at a predetermined frequency, and
     the predetermined frequency is lower than the refresh rate of the display device.
  5.  The signal processing system according to claim 4, wherein the analysis device analyzes only the capture images, among the generated capture images, that satisfy a predetermined condition.
  6.  The signal processing system according to claim 5, wherein the analysis device analyzes only the capture images, among the generated capture images, that differ from the immediately preceding capture image.
  7.  The signal processing system according to any one of claims 1 to 6, wherein the analysis device analyzes only a predetermined region of the capture image.
  8.  The signal processing system according to any one of claims 1 to 7, wherein the analysis device generates character data from the capture image by OCR (Optical Character Recognition/Reader) processing.
  9.  The signal processing system according to claim 8, wherein, when the capture image displays data of a plurality of items,
     the analysis device includes a correction unit that corrects the character data when the sequence of types of the character data obtained as a result of OCR processing differs from the sequence obtained as a result of other OCR processing for the same item.
  10.  The signal processing system according to any one of claims 1 to 9, comprising:
     a first distributor that distributes a first video signal; and
     a second distributor that distributes a second video signal,
     wherein the capture unit generates the capture image from both the video signal distributed by the first distributor and the video signal distributed by the second distributor.
  11.  The signal processing system according to any one of claims 1 to 10, wherein the capture unit creates the capture image by the same method regardless of the formats of the plurality of video signals.
  12.  A signal processing method for generating an image to be analyzed, using a distributor that distributes a video signal to be displayed on a display device, the method comprising:
     a distribution step of distributing the video signal with the distributor;
     a capture step of generating an analyzable capture image from the distributed video signal; and
     an output step of outputting the capture image to an analysis device that analyzes the capture image.
PCT/JP2023/015725 2022-04-20 2023-04-20 Signal processing system and signal processing method WO2023204265A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022069579 2022-04-20
JP2022-069579 2022-04-20

Publications (1)

Publication Number Publication Date
WO2023204265A1 true WO2023204265A1 (en) 2023-10-26

Family

ID=88419957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/015725 WO2023204265A1 (en) 2022-04-20 2023-04-20 Signal processing system and signal processing method

Country Status (1)

Country Link
WO (1) WO2023204265A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09200875A (en) * 1996-01-17 1997-07-31 Toshiba Corp Remote operation device for monitor and control system
US5881172A (en) * 1996-12-09 1999-03-09 Mitek Systems, Inc. Hierarchical character recognition system
JP2007179526A (en) * 2005-12-02 2007-07-12 Toshiba Corp Remote monitoring control system and method
WO2008120376A1 (en) * 2007-03-29 2008-10-09 Pioneer Corporation Image processing device and method, and optical character identification device and method
JP2009175967A (en) * 2008-01-23 2009-08-06 Kansai Electric Power Co Inc:The Data collection device, system, method and program
JP2010152800A (en) * 2008-12-26 2010-07-08 Kddi Corp Image processing apparatus, image processing method and program
JP2016201013A (en) * 2015-04-13 2016-12-01 富士ゼロックス株式会社 Character recognition device, character recognition processing system, and program
JP2017011581A (en) * 2015-06-24 2017-01-12 株式会社Jストリーム Moving picture processing device and moving picture processing system

Similar Documents

Publication Publication Date Title
US11900316B2 (en) Information processing apparatus, control method, and program
US6641269B2 (en) Indicated position detection by multiple resolution image analysis
EP1734456A1 (en) Learning type classification device and learning type classification method
US20070217687A1 (en) Display control method, and program, information processing apparatus and optical character recognizer
US4642813A (en) Electro-optical quality control inspection of elements on a product
KR101417696B1 (en) Pattern measuring method, pattern measuring apparatus, and recording medium
KR20060119968A (en) Apparatus and method for feature recognition
JP3382045B2 (en) Image projection system
CN112508033A (en) Detection method, storage medium, and electronic apparatus
US10931942B2 (en) Evaluation system and evaluation method
WO2023204265A1 (en) Signal processing system and signal processing method
US5309376A (en) Methods and apparatus for testing image processing apparatus
JP6408054B2 (en) Information processing apparatus, method, and program
JP6948294B2 (en) Work abnormality detection support device, work abnormality detection support method, and work abnormality detection support program
US20210201511A1 (en) Image processing apparatus, image processing method, and storage medium
CN106846302A (en) The detection method and the examination platform based on the method for a kind of correct pickup of instrument
US6340988B1 (en) Method and apparatus for displaying video data for testing a video board
CN112541881A (en) Electronic equipment failure analysis method and system
CN115280307A (en) Information processing apparatus, program, and information processing method
US20210281763A1 (en) Image processing apparatus and control method of the same, orientation adjustment system and non-transitory computer-readable medium storing program
CN113034430B (en) Video authenticity verification and identification method and system based on time watermark change analysis
WO2024190552A1 (en) Information processing device, information processing method, and program
US20060050967A1 (en) Image processing apparatus and program
JP3835302B2 (en) Image display method and apparatus
US12080057B2 (en) Image analysis apparatus, image analysis method, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23791913

Country of ref document: EP

Kind code of ref document: A1