WO2022105507A1 - Method, apparatus, computer device and storage medium for detecting the definition of a text-recording video - Google Patents

Method, apparatus, computer device and storage medium for detecting the definition of a text-recording video

Info

Publication number: WO2022105507A1
Authority: WO — WIPO (PCT)
Prior art keywords: video, frame, text, definition, detected
Application number: PCT/CN2021/124389
Other languages: English (en), French (fr)
Inventor: 王家桢
Original Assignee: 深圳壹账通智能科技有限公司
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2022105507A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular to a method and apparatus for detecting the definition of a text recording video, a computer device, and a storage medium.
  • at present, whether the recording of text materials is in focus is reviewed manually: the entire video must be watched step by step, which is time-consuming and labor-intensive.
  • the purpose of the embodiments of the present application is to provide a method and apparatus for detecting the definition of a text recording video, a computer device, and a storage medium, so as to solve the problem that manual review of text recording videos is time-consuming and labor-intensive.
  • to solve the above technical problem, an embodiment of the present application provides a method for detecting the definition of a text recording video, which adopts the following technical solution: acquiring a service recording video; calculating an ambiguity curve of the service recording video, and intercepting the service recording video according to the ambiguity curve to obtain a text recording video segment to be detected; extracting N video frames from the text recording video segment to be detected, where N is a positive integer greater than 1; inputting the N video frames into an OCR-based character recognition model, obtaining a character recognition result for each of the N video frames, and judging the frame definition of each frame according to the character recognition result; and determining the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • the embodiment of the present application also provides a text recording video clarity detection device, which adopts the following technical solutions:
  • the acquisition module is used to acquire the service recording video
  • An interception module configured to calculate the ambiguity curve of the service recording video, intercept the service recording video according to the ambiguity curve, and obtain the text recording video segment to be detected;
  • an extraction module for extracting N video frames in the text recording video clip to be detected, where N is a positive integer greater than 1;
  • a processing module configured to input the N video frames into the OCR-based character recognition model, obtain the character recognition result of each frame in the N video frames, and judge the frame definition of each frame according to the character recognition result;
  • a judging module configured to judge the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • an embodiment of the present application further provides a computer device, including a memory and a processor, wherein the memory stores computer-readable instructions, and the processor implements the following steps when executing the computer-readable instructions: acquiring a service recording video; calculating an ambiguity curve of the service recording video, and intercepting the service recording video according to the ambiguity curve to obtain a text recording video segment to be detected; extracting N video frames from the segment, where N is a positive integer greater than 1; inputting the N video frames into the OCR-based character recognition model and judging the frame definition of each frame according to the character recognition result; and determining the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • the embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the following steps are implemented: acquiring a service recording video; calculating an ambiguity curve of the service recording video, and intercepting the service recording video according to the ambiguity curve to obtain a text recording video segment to be detected; extracting N video frames from the segment, where N is a positive integer greater than 1; inputting the N video frames into the OCR-based character recognition model and judging the frame definition of each frame according to the character recognition result; and determining the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • the embodiments of the present application mainly have the following beneficial effects: a service recording video is acquired; the ambiguity curve of the service recording video is calculated, and the service recording video is intercepted according to the ambiguity curve to obtain the text recording video segment to be detected; N video frames are extracted from the segment, where N is a positive integer greater than 1; the N video frames are input into the OCR-based character recognition model to obtain the character recognition result of each frame, and the frame definition of each frame is judged according to the character recognition result; the definition of the text recording video segment to be detected is then judged according to the frame definition of each frame.
  • the clarity of text-recorded video clips does not need to be detected by human eyes watching the video, which saves time and effort and is more efficient.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of a text recording video definition detection method according to the present application;
  • FIG. 3 is a schematic diagram of the ambiguity curve of a service recording video;
  • FIG. 4 is a schematic structural diagram of an embodiment of a text recording video clarity detection device according to the present application.
  • FIG. 5 is a schematic structural diagram of an embodiment of a computer device according to the present application.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various communication client applications may be installed on the terminal devices 101 , 102 and 103 , such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, and the like.
  • the terminal devices 101, 102, and 103 can be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for the pages displayed on the terminal devices 101 , 102 , and 103 .
  • the text recording video definition detection method provided by the embodiment of the present application is generally performed by a server/terminal device, and correspondingly, the text recording video definition detection apparatus is generally set in the server/terminal device.
  • terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • the described text recording video clarity detection method includes the following steps:
  • Step S201 acquiring service recording video.
  • the electronic device (for example, the server/terminal device shown in FIG. 1) on which the text recording video definition detection method runs can receive the text recording video segment to be detected through a wired or wireless connection.
  • the above wireless connection methods may include, but are not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections, and other wireless connection methods currently known or developed in the future.
  • Step S202 Calculate the ambiguity curve of the video recorded by the service, and intercept the video recorded by the service according to the ambiguity curve to obtain the text recording video segment to be detected.
  • the clipped segment is determined according to the ambiguity curve characteristics of the video recorded by the real scene service.
  • the video recording of a real-scene service usually includes the following processes: face image recording -> moving to the text material -> text material recording -> moving back to the face -> face image recording. During the moves to the text material and back to the face, the video recording device is not focused and the captured video is blurred, so the ambiguity curve of a real service recording changes as clear -> blurred -> clear -> blurred -> clear.
  • the service recording video is intercepted at the time segment corresponding to the second clear line segment on the ambiguity curve, and the text recording video segment to be detected can be obtained.
  • the definition of each frame image is evaluated by a definition function D(f), where f(x, y) represents the gray value of the pixel (x, y) of image f, and D(f) is the result of the image definition calculation.
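The exact formula for D(f) is not reproduced in this text. As a hedged sketch only, a common choice of definition (sharpness) function over gray values f(x, y) is the variance of the image Laplacian; the function name and metric below are illustrative assumptions, not the patent's actual formula:

```python
import numpy as np

def sharpness_D(f: np.ndarray) -> float:
    """Evaluate a definition score D(f) from the gray values f(x, y).

    Illustrative assumption: variance of the discrete Laplacian, a common
    focus measure -- blurred images have weak edges, hence low variance.
    """
    f = f.astype(np.float64)
    # 4-neighbour discrete Laplacian of the gray image
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    return float(lap.var())

# A frame with sharp edges scores higher than a flat, detail-free one.
sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0   # hard vertical edge
blurred = np.full((32, 32), 128.0)                   # uniform, no detail
assert sharpness_D(sharp) > sharpness_D(blurred)
```

Plotting D(f) for every frame of the service recording video yields the ambiguity curve from which the clear and blurred segments are read off.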
  • the intercepted time period can be obtained more accurately, the redundant part can be effectively removed, and the efficiency of the sharpness detection of the text recorded video clips can be improved.
  • Step S203 extracting N video frames in the text recording video segment to be detected, where N is a positive integer greater than 1.
  • the video is composed of a series of continuously played images, and these continuously played images are video frames.
  • the frame rate of video played on the network is 30 frames per second, and the minimum can be reduced to 25 frames per second.
  • the video frame extraction is performed on the video, that is, the video frame sampling is performed on the video, which can reduce the amount of calculation and improve the processing efficiency.
  • for example, if the frame rate is 30 frames/second and the video has 150 frames in total, extracting one video frame every 10 frames yields 15 images. The extracted images are evenly distributed on the time axis, which reduces the amount of calculation while still truly reflecting the clarity of the video.
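The uniform sampling just described can be sketched as follows (the helper name is illustrative):

```python
def sample_frames(total_frames: int, step: int) -> list:
    """Indices of evenly spaced video frames: one frame every `step` frames."""
    return list(range(0, total_frames, step))

# 150 frames at 30 frames/s, one frame taken every 10 frames -> 15 images
indices = sample_frames(150, 10)
assert len(indices) == 15
assert indices[0] == 0 and indices[-1] == 140
```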
  • Step S204: inputting the N video frames into the OCR-based character recognition model, obtaining the character recognition result of each of the N video frames, and judging the frame definition of each frame according to the character recognition result.
  • the extracted N video frames are input into the OCR (Optical Character Recognition)-based character recognition model, and the character recognition result of each video frame is obtained.
  • OCR-based character recognition models can be implemented by general-purpose software.
  • whether each frame is clear is judged according to the character recognition result of that video frame.
  • if the text in a video frame can be recognized by the OCR-based character recognition model, the video frame is considered clear; otherwise, the video frame is considered blurred.
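A minimal sketch of the per-frame judgment, assuming a generic OCR engine is available: the `ocr` callable stands in for any general-purpose recognizer (e.g. a Tesseract wrapper), and the 20-character cutoff follows the second-threshold embodiment described later in this document.

```python
from typing import Callable

def frame_is_clear(frame, ocr: Callable[[object], str], second_threshold: int = 20) -> bool:
    """Run the OCR-based recognizer on a frame and treat the frame as clear
    when the number of recognized characters exceeds the threshold."""
    text = ocr(frame)
    return len(text.strip()) > second_threshold

# Stub recognizers illustrate the decision rule without a real OCR engine:
# a focused frame yields a full line of text, a blurred frame yields noise.
assert frame_is_clear("sharp-frame", lambda f: "This paragraph of contract text was recognized.")
assert not frame_is_clear("blurred-frame", lambda f: "q3#")
```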
  • Step S205 judging the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • the number of video frames judged to be clear is counted, the ratio of this number to the total number of extracted video frames is calculated, and the ratio is compared with a preset value.
  • if the ratio is greater than the set threshold, the text recording video to be detected is judged to be clear.
  • the present application obtains the service recording video; calculates the ambiguity curve of the service recording video and intercepts it according to the curve to obtain the text recording video segment to be detected; extracts N video frames from the segment, where N is a positive integer greater than 1; inputs the N video frames into the OCR-based character recognition model to obtain the character recognition result of each frame; judges the frame definition of each frame according to the character recognition result; and judges the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • the clarity of text-recorded video clips does not need to be detected by human eyes watching the video, which saves time and effort and is more efficient.
  • in some embodiments, the following steps are further included after step S201:
  • acquiring the audio recorded in synchronization with the service recording video;
  • performing text conversion on the audio to obtain a text conversion result, comparing the text conversion result with a preset first keyword and a preset second keyword, and obtaining a first time point and a second time point at which the first keyword and the second keyword first appear in the audio;
  • intercepting the service recording video according to the time period formed by the first time point and the second time point to obtain a first video segment;
  • calculating the ambiguity curve of the first video segment and intercepting the first video segment according to the ambiguity curve to obtain the text recording video segment to be detected.
  • a real business scene recording includes not only video recording but also synchronized audio recording.
  • the video recording usually includes not only text recording video clips but also face recording parts.
  • the time period for intercepting the service recording video is determined by the time points at which keywords first appear in the audio file recorded in synchronization with the video. For example, in the process of serving a customer, the salesperson usually says "please read" when starting to show the text material, and usually says "reading completed" when ending the presentation.
  • "please read" and "reading completed" are therefore set as the first keyword and the second keyword respectively; the audio is converted into text by general speech-to-text software, the text conversion result is compared with the two keywords, and the first time point and the second time point at which they first appear are obtained.
  • the service recording video is intercepted between the first time point and the second time point to obtain the first video segment.
  • alternatively, only the first keyword is set: the time point at which the first keyword first appears in the audio is used as the start time, and the service recording video is intercepted for a set duration (for example, 5 seconds) to obtain the first video segment. This likewise removes redundant parts and retains only the text recording video clip.
  • the blur degree curve of the first video clip is calculated, the blurred part in the first video clip is removed, and the redundancy is further removed, thereby improving the efficiency and accuracy of the sharpness detection of the text recording video clip.
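The keyword-based interception above can be sketched as follows, assuming the speech-to-text tool returns timestamped phrases; the transcript shape and helper name are illustrative assumptions, not a specific API.

```python
def clip_bounds(transcript, first_kw: str, second_kw: str):
    """Return (start, end) seconds for the first video segment: the first
    time points at which the two preset keywords occur in a timestamped
    transcript, given as a list of (time_seconds, phrase) pairs."""
    start = next(t for t, phrase in transcript if first_kw in phrase)
    end = next(t for t, phrase in transcript if second_kw in phrase and t > start)
    return start, end

transcript = [
    (3.0, "hello, welcome"),
    (12.5, "please read this document"),
    (41.0, "reading completed, thank you"),
    (50.0, "goodbye"),
]
assert clip_bounds(transcript, "please read", "reading completed") == (12.5, 41.0)
```

The returned time period is then used to cut the service recording video before the ambiguity-curve step refines it further.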
  • in some embodiments, step S203 includes the following steps:
  • parsing the text recording video segment to be detected into a video frame set;
  • extracting L video frame subsets from the video frame set at a set interval, where L is a positive integer greater than 1 and each video frame subset consists of M temporally adjacent video frames in the video frame set, M being a positive integer greater than 1.
  • since the text recording video to be detected is an important element of post-event supervision, the focus when recording the text material is required to last for a certain period of time, so that the human eye can recognize the material afterwards.
  • the extractions are evenly spaced in time.
  • for example, the whole video segment has 300 frames, and M consecutive frames are sampled every fixed number of frames, where M is a positive integer greater than 1; for instance, 5 consecutive frames are sampled every 20 frames.
  • this simulates the fact that human-eye recognition takes a certain length of time, making the definition detection more accurate.
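The subset extraction can be sketched with the example figures above (300 frames, 5 consecutive frames out of every 20); the helper name is illustrative:

```python
def sample_subsets(total_frames: int, period: int, m: int) -> list:
    """Extract subsets of M temporally adjacent frame indices at a uniform
    interval: M consecutive frames taken at the start of every `period`
    frames (e.g. 5 frames out of every 20)."""
    return [list(range(start, start + m))
            for start in range(0, total_frames - m + 1, period)]

subsets = sample_subsets(300, 20, 5)   # 300-frame clip, 5 frames every 20
assert len(subsets) == 15              # L = 15 subsets
assert subsets[0] == [0, 1, 2, 3, 4]   # each subset is temporally adjacent
assert all(len(s) == 5 for s in subsets)
```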
  • step S205 includes the following steps:
  • Step S301: judging the definition of each video frame subset according to the frame definition of each frame.
  • Step S302: judging the definition of the text recording video segment to be detected according to the definition of each video frame subset.
  • the definition of each extracted video frame subset is first judged according to the frame definition of each frame: the ratio of the number of clear video frames in the subset to the total number of video frames in the subset is calculated and compared with a set threshold; if the ratio is greater than the threshold, the subset is judged to be clear, otherwise it is judged to be blurred.
  • step S302 includes the following steps:
  • according to the definition of each video frame subset, calculating the ratio of the number of clear video frame subsets to the total number L of extracted video frame subsets;
  • the ratio is compared with a preset first threshold, and when the ratio is greater than the first threshold, it is determined that the text recording video segment to be detected is clear.
  • the definition of the text recording video segment to be detected is determined by calculating the ratio of the number of clear video frame subsets to the total number of extracted video frame subsets. When the ratio is greater than the preset first threshold, it is determined that the text recording video segment to be detected is clear; otherwise, it is determined that the text recording video segment to be detected is fuzzy.
  • the video frame subset is composed of multiple temporally adjacent video frames, which simulates the factor that human eye recognition requires a certain length of time, and can avoid the deviation between computer judgment and human eye recognition.
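The two-level judgment (subset definition from frame definitions, then clip definition from subset definitions) can be sketched as follows; the threshold values are illustrative, not the patent's preset values.

```python
def subset_is_clear(frame_flags, threshold: float) -> bool:
    """A subset is clear when the fraction of clear frames exceeds the threshold."""
    return sum(frame_flags) / len(frame_flags) > threshold

def clip_is_clear(subset_flags, first_threshold: float) -> bool:
    """The clip is clear when the fraction of clear subsets among the L
    extracted subsets exceeds the preset first threshold."""
    return sum(subset_flags) / len(subset_flags) > first_threshold

# Hypothetical per-frame results for L = 4 subsets of M = 5 frames each:
subsets = [[True] * 5,
           [True, True, True, True, False],
           [True] * 5,
           [False] * 5]
subset_flags = [subset_is_clear(s, 0.6) for s in subsets]
assert subset_flags == [True, True, True, False]
assert clip_is_clear(subset_flags, 0.5)   # 3/4 > 0.5, clip judged clear
```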
  • step S204 includes the following steps:
  • the number of characters is compared with a preset second threshold, and when the number of characters is greater than the second threshold, it is determined that the corresponding video frame is clear.
  • whether the corresponding video frame is clear is determined by calculating the number of characters contained in the character recognition result of each frame.
  • for example, the second threshold is set to 20.
  • the threshold of the number of characters is set to determine whether the video frame is clear, and the determination result is more objective and accurate.
  • the text recording video clarity detection method in this application relates to the field of artificial intelligence; in addition, this application can also be applied to the field of financial technology.
  • the present application may be used in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and the like.
  • the application may be described in the general context of computer-executable instructions, such as computer-readable instruction modules, being executed by a computer.
  • modules of computer-readable instructions include routines, computer-readable instructions, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • modules of computer readable instructions may be located in both local and remote computer storage media including storage devices.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM) or the like.
  • the present application provides an embodiment of a text recording video definition detection apparatus, which corresponds to the method embodiment shown in FIG. 2; specifically, the apparatus can be applied to various electronic devices.
  • the apparatus 400 for detecting the definition of a text recording video in this embodiment includes: an acquisition module 401, an interception module 402, an extraction module 403, a processing module 404, and a judgment module 405, wherein:
  • an obtaining module 401 configured to obtain a service recording video
  • An interception module 402 configured to calculate an ambiguity curve of the service recording video, intercept the service recording video according to the ambiguity curve, and obtain a text recording video segment to be detected;
  • Extraction module 403 for extracting N video frames in the text recording video clip to be detected, wherein N is a positive integer greater than 1;
  • the processing module 404 is configured to input the N video frames into the OCR-based character recognition model, obtain the character recognition result of each frame in the N video frames, and judge the frame definition of each frame according to the character recognition result;
  • the judgment module 405 is configured to judge the definition of the text recording video segment to be detected according to the frame definition of each frame.
  • the service recording video is acquired; the ambiguity curve of the service recording video is calculated, and the service recording video is intercepted according to the ambiguity curve to obtain the text recording video segment to be detected; N video frames are extracted from the segment, where N is a positive integer greater than 1; the N video frames are input into the OCR-based character recognition model to obtain the character recognition result of each of the N video frames; the frame definition of each frame is judged according to the character recognition result; and the definition of the text recording video segment to be detected is judged according to the frame definition of each frame.
  • the clarity of text-recorded video clips does not need to be detected by human eyes watching the video, which saves time and effort and is more efficient.
  • the device for detecting the clarity of text recorded video further includes:
  • the first acquisition sub-module is used to acquire the audio synchronized with the recorded video of the service
  • the first processing submodule is used to perform text conversion on the audio, obtain a text conversion result, compare the text conversion result with a preset first keyword and a second keyword, and obtain the first keyword and a first time point and a second time point at which the second keyword first appears in the audio;
  • a first interception submodule configured to intercept the service recording video according to the time period formed by the first time point and the second time point to obtain a first video segment
  • the second interception sub-module is configured to calculate the ambiguity curve of the first video segment, intercept the first video segment according to the ambiguity curve, and obtain the text recording video segment to be detected.
  • the extraction module 403 includes:
  • a first parsing submodule for parsing the text recording video clip to be detected into a video frame set
  • the first extraction sub-module is configured to extract L video frame subsets from the video frame set at a set interval, where L is a positive integer greater than 1, and each video frame subset is composed of M temporally adjacent video frames in the video frame set, M being a positive integer greater than 1.
  • the judgment module 405 includes:
  • a second processing submodule configured to determine the definition of each video frame subset according to the frame definition of each frame
  • the first judgment submodule is configured to judge the definition of the text recording video segment to be detected according to the definition of each video frame subset.
  • the first judgment submodule includes:
  • a first calculation subunit configured to calculate the ratio of the number of clear video frame subsets in each video frame subset to the total number L of the extracted video frame subsets according to the definition of each video frame subset;
  • a first judging subunit configured to compare the ratio with a preset first threshold, and when the ratio is greater than the first threshold, determine that the text recording video segment to be detected is clear.
  • the processing module 404 includes:
  • the first calculation submodule is used to calculate the number of characters included in the character recognition results of the respective frames
  • the second judgment sub-module is configured to compare the number of characters with a preset second threshold, and when the number of characters is greater than the second threshold, determine that the corresponding video frame is clear.
  • FIG. 5 is a block diagram of a basic structure of a computer device according to this embodiment.
  • the computer device 5 includes a memory 51, a processor 52, and a network interface 53 that communicate with each other through a system bus. It should be pointed out that only the computer device 5 with components 51-53 is shown in the figure, but it should be understood that not all of the shown components are required; more or fewer components may be implemented instead. Those skilled in the art can understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes but is not limited to microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, and the like.
  • the computer equipment may be a desktop computer, a notebook computer, a palmtop computer, a cloud server and other computing equipment.
  • the computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad or a voice control device.
  • the memory 51 includes at least one type of computer-readable storage medium.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the computer-readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 51 may be an internal storage unit of the computer device 5 , such as a hard disk or a memory of the computer device 5 .
  • the memory 51 may also be an external storage device of the computer device 5, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 51 may also include both the internal storage unit of the computer device 5 and its external storage device.
  • the memory 51 is generally used to store the operating system and various application software installed on the computer device 5 , such as computer-readable instructions for a method for detecting the resolution of a text recorded video.
  • the memory 51 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 52 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips. This processor 52 is typically used to control the overall operation of the computer device 5 . In this embodiment, the processor 52 is configured to execute the computer-readable instructions or process data stored in the memory 51, for example, the computer-readable instructions for executing the method for detecting the sharpness of the text recorded video.
  • the network interface 53 may include a wireless network interface or a wired network interface, and the network interface 53 is generally used to establish a communication connection between the computer device 5 and other electronic devices.
  • the service recording video is obtained; the ambiguity curve of the service recording video is calculated, and the service recording video is intercepted according to the ambiguity curve to obtain the text recording video segment to be detected; N video frames are extracted from the text recording video segment to be detected, where N is a positive integer greater than 1; the N video frames are input into the OCR-based character recognition model to obtain the character recognition result of each of the N video frames; the frame definition of each frame is judged according to the character recognition result; and the definition of the text recording video segment to be detected is judged according to the frame definition of each frame.
  • the clarity of text-recorded video clips does not need to be detected by human eyes watching the video, which saves time and effort and is more efficient.
  • the present application also provides another embodiment, that is, to provide a computer-readable storage medium, where the computer-readable storage medium stores computer-readable instructions, and the computer-readable instructions can be executed by at least one processor to The at least one processor is caused to perform the steps of the above-mentioned method for detecting the sharpness of a text recorded video.
  • the service recording video is obtained; the ambiguity curve of the service recording video is calculated, and the service recording video is intercepted according to the ambiguity curve to obtain the text recording video segment to be detected; N video frames are extracted from the text recording video segment to be detected, where N is a positive integer greater than 1; the N video frames are input into the OCR-based character recognition model to obtain the character recognition result of each of the N video frames; the frame definition of each frame is judged according to the character recognition result; and the definition of the text recording video segment to be detected is judged according to the frame definition of each frame.
  • the clarity of text-recorded video clips does not need to be detected by human eyes watching the video, which saves time and effort and is more efficient.
  • the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware, although in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing over the prior art, can be embodied as a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or a CD-ROM) and including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of this application.

Abstract

本申请实施例属于人工智能领域,涉及一种文本录制视频清晰度检测方法,包括获取业务录制视频;计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。本申请还提供一种文本录制视频清晰度检测装置、计算机设备及存储介质。文本录制视频片段的清晰度无需通过人眼观看视频来检测,更省时省力,效率更高。

Description

文本录制视频清晰度检测方法、装置、计算机设备及存储介质
本申请要求于2020年11月17日提交中国专利局、申请号为202011286396.2,发明名称为“文本录制视频清晰度检测方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及文本录制视频清晰度检测方法、装置、计算机设备及存储介质。
背景技术
保险、证券、银行等金融业务的开展过程中,对业务规范性要求高,为尽可能减少事后的纠纷,以及为事后提供监督要素,中国银保监会、证监会制定了行业规范,要求业务员在为客户提供金融服务过程中,应对关键环节同步录音录像。录音录像过程中,不仅仅需要对人脸图像进行录制,在许多关键环节需要业务员对着镜头向客户现场展示一些重要文本材料,作为事后监督的监督要素。
发明人意识到,真实的业务场景中,业务员可能对文本材料没有很好的聚焦,造成该视频录制不符合要求,不能作为事后监督的素材,需要重新录制。目前文本材料录制是否聚焦的审核是由人工进行的,需要人工对整个视频逐步观看进行审核,费时费力。
发明内容
本申请实施例的目的在于提出一种文本录制视频清晰度检测方法、装置、计算机设备及存储介质,以解决人工审核文本录制视频费时费力的问题。
为了解决上述技术问题,本申请实施例提供一种文本录制视频清晰度检测方法,采用了如下所述的技术方案:
获取业务录制视频;
计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
为了解决上述技术问题,本申请实施例还提供一种文本录制视频清晰度检测装置,采用了如下所述的技术方案:
获取模块,用于获取业务录制视频;
截取模块,用于计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
抽取模块,用于抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
处理模块,用于将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
判断模块,用于根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
为了解决上述技术问题,本申请实施例还提供一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
获取业务录制视频;
计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
为了解决上述技术问题,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可读指令,所述计算机可读指令被处理器执行时实现如下步骤:
获取业务录制视频;
计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
与现有技术相比,本申请实施例主要有以下有益效果:通过获取业务录制视频;计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。文本录制视频片段的清晰度无需通过人眼观看视频来检测,更省时省力,效率更高。
附图说明
为了更清楚地说明本申请中的方案,下面将对本申请实施例描述中所需要使用的附图作一个简单介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请可以应用于其中的示例性系统架构图;
图2是根据本申请的文本录制视频清晰度检测方法的一个实施例的流程图；
图3是业务录制视频的模糊度曲线示意图;
图4是根据本申请的文本录制视频清晰度检测装置的一个实施例的结构示意图;
图5是根据本申请的计算机设备的一个实施例的结构示意图。
具体实施方式
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同;本文中在申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请;本申请的说明书和权利要求书及上述附图说明中的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。本申请的说明书和权利要求书或上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
为了使本技术领域的人员更好地理解本申请方案，下面将结合附图，对本申请实施例中的技术方案进行清楚、完整地描述。
如图1所示,系统架构100可以包括终端设备101、102、103,网络104和服务器105。网络104用以在终端设备101、102、103和服务器105之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。
用户可以使用终端设备101、102、103通过网络104与服务器105交互,以接收或发送消息等。终端设备101、102、103上可以安装有各种通讯客户端应用,例如网页浏览器应用、购物类应用、搜索类应用、即时通信工具、邮箱客户端、社交平台软件等。
终端设备101、102、103可以是具有显示屏并且支持网页浏览的各种电子设备,包括但不限于智能手机、平板电脑、电子书阅读器、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机等等。
服务器105可以是提供各种服务的服务器,例如对终端设备101、102、103上显示的页面提供支持的后台服务器。
需要说明的是,本申请实施例所提供的文本录制视频清晰度检测方法一般由服务器/终端设备执行,相应地,文本录制视频清晰度检测装置一般设置于服务器/终端设备中。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。
继续参考图2,示出了根据本申请的文本录制视频清晰度检测的方法的一个实施例的流程图。所述的文本录制视频清晰度检测方法,包括以下步骤:
步骤S201,获取业务录制视频。
在本实施例中,文本录制视频清晰度检测方法运行于其上的电子设备(例如图1所示的服务器/终端设备)可以通过有线连接方式或者无线连接方式接收待检测的文本录制视频片段。需要指出的是,上述无线连接方式可以包括但不限于3G/4G连接、WiFi连接、蓝牙连接、WiMAX连接、Zigbee连接、UWB(ultra wideband)连接、以及其他现在已知或将来开发的无线连接方式。
通过带摄像头的电子设备拍摄业务录制视频片段。
步骤S202,计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段。
在本实施例中,根据真实场景业务录制视频的模糊度曲线特点,确定截取的片段。真实场景业务录制视频,通常会包括以下过程:人脸图像录制->转到文本材料->文本材料录制->转到人脸->人脸图像录制,其中转到文本材料的过程和转到人脸的过程录像设备没有对焦,拍到的视频是模糊的,所以真实业务录制视频的模糊度曲线变化的特点为清晰->模糊->清晰->模糊->清晰。以模糊度曲线上第二段清晰的线段对应的时间段截取业务录制视频,即可获得待检测的文本录制视频片段。
通过使用Brenner梯度函数D(f)计算相邻的两个像素灰度差的平方,来获得业务录制视频的模糊度曲线。D(f)函数定义如下:
D(f)=∑_y∑_x |f(x+2,y)-f(x,y)|²
其中f(x,y)表示图像f所对应的像素点(x,y)的灰度值，D(f)为图像清晰度计算的结果。
根据业务录制视频的特点,其模糊度曲线大致如图3所示,其中t1-t2这一时间段的视频片段即为待检测的文本录制视频片段。
通过业务录制视频的模糊度曲线特点,可以更准确的获得截取的时间段,有效的去除了冗余部分,提高了文本录制视频片段清晰度检测的效率。
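As an illustrative sketch of the two steps above (the Brenner gradient score and the selection of the second clear run in the clear→blur→clear→blur→clear pattern), the following pure-Python code operates on a 2-D grayscale array and a per-frame score list. The threshold value and the toy score curve are placeholder assumptions; a real pipeline would decode frames and grayscale values with a video library such as OpenCV.

```python
def brenner(gray):
    """Brenner gradient D(f) = sum over x,y of |f(x+2,y) - f(x,y)|^2.

    `gray` is a 2-D list of grayscale values (rows of pixels)."""
    h, w = len(gray), len(gray[0])
    return sum((gray[y][x + 2] - gray[y][x]) ** 2
               for y in range(h) for x in range(w - 2))

def second_clear_segment(scores, threshold):
    """Given per-frame sharpness scores, return the (start, end) frame indices
    of the second contiguous run of 'clear' frames (score >= threshold),
    matching the clear->blur->clear->blur->clear shape of a service recording:
    the second clear run is the text-recording segment (t1-t2 in Fig. 3)."""
    runs, start = [], None
    for i, s in enumerate(scores + [threshold - 1]):  # sentinel closes the last run
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            runs.append((start, i - 1))
            start = None
    return runs[1] if len(runs) > 1 else None

# toy sharpness curve: clear, blur, clear, blur, clear
scores = [9, 9, 2, 1, 8, 9, 9, 1, 2, 9]
print(second_clear_segment(scores, threshold=5))  # -> (4, 6)
```

The sentinel appended to `scores` simply guarantees that a run still open at the end of the list is recorded before the second run is selected.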
步骤S203,抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数。
在本实施例中,视频由一系列连续播放的图像构成,这些连续播放的图像就是视频帧。通常网络上播放的视频帧率为30帧/秒,最低可以降到25帧/秒。对视频进行视频帧抽取,即对视频进行视频帧抽样,可以减少计算量,提高处理效率。
例如检测一个5秒时长的视频,帧率为30帧/秒,该视频共150帧,每10帧抽取一个视频帧,共得到15幅图像,后续只需要对15幅图像进行处理,且15幅图像在时间轴上均匀分布,减少了计算量的同时也能真实体现视频的清晰度。
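The uniform sampling in the example above (one frame every 10 for a 150-frame clip) can be sketched as follows; the step size of 10 is the example value from the text, not a fixed parameter of the method.

```python
def sample_frames(total_frames, step):
    """Uniformly sample one frame index every `step` frames along the timeline."""
    return list(range(0, total_frames, step))

# a 5-second clip at 30 fps: 150 frames, one frame taken every 10
indices = sample_frames(150, 10)
print(len(indices))  # -> 15
```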
步骤S204,将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度。
在本实施例中,将抽取的N个视频帧输入到基于OCR的文字识别模型中,获得每个视频帧的文字识别结果。OCR(Optical Character Recognition,光学字符识别),是指电子设备(例如扫描仪或数码相机)检查纸上打印的字符,通过检测暗、亮的模式确定其形状,然后用字符识别方法将形状翻译成计算机文字的过程。基于OCR的文字识别模型可以通过通用软件实现。
根据各个视频帧的文字识别结果分别判断各帧是否清晰。当视频帧通过基于OCR的文字识别模型可以识别出字符,则认为该视频帧清晰,否则,认为该视频帧模糊。
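The per-frame rule above (a frame is clear if the OCR model recognizes any characters, blurred otherwise) can be sketched with the recognizer injected as a callable, so the rule itself stays testable without an OCR engine; in practice the callable would wrap a general-purpose OCR tool (for example a pytesseract-based function), which is an assumption here, not part of the original text.

```python
def frame_definition(frames, recognize):
    """Judge each frame: clear (True) if the OCR callable returns any characters.

    `recognize` maps a frame to its recognized text string."""
    return [len(recognize(f).strip()) > 0 for f in frames]

# stand-in recognizer: pretend the blurred frame yields no text
fake_ocr = {"sharp": "合同条款第一条", "blurred": ""}
print(frame_definition(["sharp", "blurred"], fake_ocr.get))  # -> [True, False]
```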
步骤S205,根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
在本实施例中,根据步骤S204中得到的各帧的帧清晰度,计算判断为清晰的视频帧的个数,并计算这个数值与抽取的视频帧总数的比值,将得到比值与预先设定的阈值比较,比值大于设定的阈值时,判断待检测的文本录制视频清晰。
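The clip-level judgment described above (count the clear frames, take their ratio over the total sampled frames, and compare against a preset threshold) reduces to a few lines; the threshold value in the example is a placeholder.

```python
def clip_is_clear(frame_flags, ratio_threshold):
    """A clip is judged clear when the fraction of clear frames
    exceeds the preset threshold."""
    ratio = sum(frame_flags) / len(frame_flags)
    return ratio > ratio_threshold

# 12 of 15 sampled frames were judged clear
print(clip_is_clear([True] * 12 + [False] * 3, 0.7))  # -> True (0.8 > 0.7)
```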
本申请通过获取业务录制视频;计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。文本录制视频片段的清晰度无需通过人眼观看视频来检测,更省时省力,效率更高。
在本实施例的一些可选的实现方式中,步骤S201之后,包括以下步骤:
获取与所述业务录制视频同步的音频;
将所述音频进行文字转换,获取文字转换结果,将所述文字转换结果与预设的第一关键词和第二关键词比对,获取所述第一关键词和所述第二关键词在所述音频中首次出现的第一时间点和第二时间点;
根据所述第一时间点和所述第二时间点构成的时间段截取所述业务录制视频,获得第一视频片段;
计算所述第一视频片段的模糊度曲线,根据所述模糊度曲线,截取所述第一视频片段,获得待检测的文本录制视频片段。
上述实施方式,真实的业务场景录制不仅包括视频录制,同步会录制音频。视频录制通常不只包含文本录制视频片段还包含人脸录制部分,为了将文本录制视频片段更准确的分离出来,通过与业务录制视频同步录制的音频文件中的关键词首次出现的时间点确定截取业务录制视频的时间段,例如,业务员在为客户服务过程中,开始展示文本材料时通常会说“请阅读”,在结束展示时,通常会说“阅读完毕”,这里,把“请阅读”和“阅读完毕”分别设为第一关键词和第二关键词,通过通用的语音转文字软件将音频转为文字,将文字转换结果与第一关键词和第二关键词比对,获得第一关键词和第二关键词在音频中首次出现的第一时间点和第二时间点,以第一时间点和第二时间点截取业务录制视频,获得第一视频片段。
在一些实施方式中，只设定第一关键词，以音频中首次出现第一关键词的时间点为开始时间，以设定时长对业务录制视频进行截取，例如设定截取时长为5秒，获得第一视频片段，也可以起到去除冗余部分，只保留文本录制视频片段的作用。
再计算第一视频片段的模糊度曲线,去除第一视频片段中模糊的部分,进一步的去除冗余,提高了文本录制视频片段清晰度检测的效率和准确性。
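The audio-keyword step above can be sketched as follows. The transcript format — a list of (time in seconds, text) pairs — is an assumption standing in for the output of whatever speech-to-text tool is used; the keywords "请阅读" and "阅读完毕" are the example values from the text.

```python
def keyword_window(transcript, first_kw, second_kw):
    """Return the (start, end) window given by the first occurrence of each
    keyword in a timed transcript; the video is then clipped to this window.

    `transcript` is a list of (time_seconds, text) pairs, assumed to come
    from any speech-to-text tool that produces timestamps."""
    t1 = next(t for t, txt in transcript if first_kw in txt)
    t2 = next(t for t, txt in transcript if second_kw in txt and t > t1)
    return t1, t2

transcript = [(3.0, "您好"), (12.5, "请阅读以下条款"), (41.0, "阅读完毕")]
print(keyword_window(transcript, "请阅读", "阅读完毕"))  # -> (12.5, 41.0)
```

The blur-degree curve of the resulting first video segment is then computed as in step S202 to trim any remaining unfocused portion.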
在本实施例的一些可选的实现方式中,步骤S203中,包括以下步骤:
将所述待检测的文本录制视频片段解析为视频帧集;
按照设定的间隔从所述视频帧集中抽取L个视频帧子集,L为大于1的正整数,其中,所述视频帧子集为所述视频帧集中时间上相邻的M个视频帧构成,M为大于1的正整数。
上述实施方式,因为待检测的文本录制视频作为事后监督的重要要素,要求对文本材料录制时对焦时间持续一定时间,方便事后人眼可以识别。为避免抽取方式可能会引入的清晰度检测与人眼识别存在偏差的问题,也就是说通过计算机检测为清晰,而人眼不容易识别的问题,采用均匀间隔每次抽取时间上相邻的多个视频帧的方式抽样,例如整个视频片段300帧,每隔固定帧数连续的抽取M帧,M为大于1的正整数,例如每隔20帧抽取5帧。
通过每次连续的抽取多个视频帧方式,模拟人眼的识别需要一定时长这一因素,使清晰度检测更准确。
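The subset sampling described above — every fixed interval, take M temporally adjacent frames — can be sketched as follows, using the example figures from the text (a 300-frame clip, 5 adjacent frames taken every 20 frames).

```python
def sample_subsets(total_frames, interval, m):
    """Every `interval` frames, take a subset of `m` temporally adjacent
    frame indices; returns the list of L subsets."""
    subsets = []
    start = 0
    while start + m <= total_frames:
        subsets.append(list(range(start, start + m)))
        start += interval
    return subsets

subs = sample_subsets(300, interval=20, m=5)
print(len(subs), subs[0])  # -> 15 [0, 1, 2, 3, 4]
```

Taking adjacent frames in each subset models the fact that human recognition of the text needs the material to stay in focus for a sustained moment, not for isolated frames.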
在本实施例的一些可选的实现方式中,步骤S205中,包括以下步骤:
步骤S301,根据所述各帧的帧清晰度判断所述各视频帧子集清晰度;
步骤S302,根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度。
上述实施方式,先根据各帧的帧清晰度判断抽取的各视频帧子集的清晰度,可以通过计算视频帧子集中清晰的视频帧个数占视频帧子集中视频帧总数的比来判断,将比值与设定的阈值比较,大于设定的阈值,判断该视频帧子集清晰,否则,判断该视频帧子集模糊。
在本实施例的一些可选的实现方式中,步骤S302中,包括以下步骤:
根据所述各视频帧子集清晰度,计算所述各视频帧子集中清晰的视频帧子集个数占所述抽取的视频帧子集总数L的比值;
将所述比值与预设的第一阈值比较,当所述比值大于所述第一阈值时,判断所述待检测的文本录制视频片段清晰。
上述实施方式,通过计算清晰的视频帧子集个数占抽取的视频帧子集总数的比值来判断所述待检测的文本录制视频片段的清晰度。当比值大于预设的第一阈值时,判断待检测的文本录制视频片段清晰,否则,判断待检测的文本录制视频片段模糊。
视频帧子集由时间上相邻的多个视频帧构成,模拟了人眼识别需要一定时长这一因素,可以避免计算机判断与人眼识别的偏差。
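The two-level judgment above — first judge each M-frame subset by its ratio of clear frames, then judge the clip by the ratio of clear subsets among the L sampled subsets against the first threshold — can be sketched as follows; both ratio thresholds are placeholder values.

```python
def subset_is_clear(frame_flags, frame_ratio):
    """A subset is clear if its fraction of clear frames exceeds `frame_ratio`."""
    return sum(frame_flags) / len(frame_flags) > frame_ratio

def clip_clear_from_subsets(subset_flags, frame_ratio, first_threshold):
    """Judge each subset, then compare the fraction of clear subsets
    against the preset first threshold."""
    clear = [subset_is_clear(flags, frame_ratio) for flags in subset_flags]
    return sum(clear) / len(clear) > first_threshold

subsets = [[True] * 5, [True, True, True, False, False], [False] * 5]
print(clip_clear_from_subsets(subsets, 0.5, 0.6))  # -> True (2 of 3 subsets clear)
```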
在本实施例的一些可选的实现方式中,步骤S204中,包括以下步骤:
分别计算所述各帧的文字识别结果中包含的字符数;
将所述字符数与预设的第二阈值比较,当所述字符数大于所述第二阈值时,判断对应的视频帧清晰。
上述实施方式,通过计算各帧的文字识别结果中包含的字符数来判断对应的视频帧是否清晰,例如设定的第二阈值为20,当一个视频帧通过基于OCR的文字识别模型识别出来的字符个数大于或等于20,判断该视频帧清晰,否则,判断该视频帧模糊。
本实施方式通过设定字符数阈值来判断视频帧是否清晰,判断结果更加客观、准确。
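The character-count rule above reduces to a length comparison; following the embodiment's example, a frame is judged clear when the number of recognized characters reaches the second threshold of 20 (the threshold value itself is configurable).

```python
def frame_clear_by_char_count(recognized_text, second_threshold=20):
    """A frame is judged clear when OCR recognizes at least `second_threshold`
    characters (20 is the example value from the embodiment)."""
    return len(recognized_text) >= second_threshold

print(frame_clear_by_char_count("本协议双方经友好协商一致同意以下全部条款内容"))  # -> True (22 characters)
```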
本申请中的文本录制视频清晰度检测方法涉及人工智能领域;此外,本申请还可以应用于金融科技领域。
本申请可用于众多通用或专用的计算机系统环境或配置中。例如：个人计算机、服务器计算机、手持设备或便携式设备、平板型设备、多处理器系统、基于微处理器的系统、置顶盒、可编程的消费电子设备、网络PC、小型计算机、大型计算机、包括以上任何系统或设备的分布式计算环境等等。本申请可以在由计算机执行的计算机可执行指令的一般上下文中描述，例如计算机可读指令模块。一般地，计算机可读指令模块包括执行特定任务或实现特定抽象数据类型的例程、计算机可读指令、对象、组件、数据结构等等。也可以在分布式计算环境中实践本申请，在这些分布式计算环境中，由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中，计算机可读指令模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,该计算机可读指令可存储于一计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等非易失性存储介质,或随机存储记忆体(Random Access Memory,RAM)等。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
进一步参考图4,作为对上述图2所示方法的实现,本申请提供了一种文本录制视频清晰度检测装置的一个实施例,该装置实施例与图2所示的方法实施例相对应,该装置具体可以应用于各种电子设备中。
如图4所示，本实施例所述的文本录制视频清晰度检测装置400包括：获取模块401、截取模块402、抽取模块403、处理模块404以及判断模块405。其中：
获取模块401,用于获取业务录制视频;
截取模块402,用于计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
抽取模块403,用于抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
处理模块404,用于将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
判断模块405,用于根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
在本实施例中,通过获取业务录制视频;计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。文本录制视频片段的清晰度无需通过人眼观看视频来检测,更省时省力,效率更高。
在本实施例的一些可选的实现方式中,所述的文本录制视频清晰度检测装置还包括:
第一获取子模块,用于获取与所述业务录制视频同步的音频;
第一处理子模块,用于将所述音频进行文字转换,获取文字转换结果,将所述文字转换结果与预设的第一关键词和第二关键词比对,获取所述第一关键词和所述第二关键词在所述音频中首次出现的第一时间点和第二时间点;
第一截取子模块,用于根据所述第一时间点和所述第二时间点构成的时间段截取所述业务录制视频,获得第一视频片段;
第二截取子模块,用于计算所述第一视频片段的模糊度曲线,根据所述模糊度曲线,截取所述第一视频片段,获得待检测的文本录制视频片段。
在本实施例的一些可选的实现方式中,抽取模块403包括:
第一解析子模块,用于将所述待检测的文本录制视频片段解析为视频帧集;
第一抽取子模块,用于按照设定的间隔从所述视频帧集中抽取L个视频帧子集,L为大于1的正整数,其中,所述视频帧子集为所述视频帧集中时间上相邻的M个视频帧构成,M为大于1的正整数。
在本实施例的一些可选的实现方式中,判断模块405包括:
第二处理子模块,用于根据所述各帧的帧清晰度判断所述各视频帧子集清晰度;
第一判断子模块,用于根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度。
进一步的,第一判断子模块包括:
第一计算子单元,用于根据所述各视频帧子集清晰度,计算所述各视频帧子集中清晰的视频帧子集个数占所述抽取的视频帧子集总数L的比值;
第一判断子单元,用于将所述比值与预设的第一阈值比较,当所述比值大于所述第一阈值时,判断所述待检测的文本录制视频片段清晰。
在本实施例的一些可选的实现方式中,处理模块404包括:
第一计算子模块,用于分别计算所述各帧的文字识别结果中包含的字符数;
第二判断子模块,用于将所述字符数与预设的第二阈值比较,当所述字符数大于所述第二阈值时,判断对应的视频帧清晰。
为解决上述技术问题,本申请实施例还提供计算机设备。具体请参阅图5,图5为本实施例计算机设备基本结构框图。
所述计算机设备5包括通过系统总线相互通信连接的存储器51、处理器52、网络接口53。需要指出的是，图中仅示出了具有组件51-53的计算机设备5，但是应理解的是，并不要求实施所有示出的组件，可以替代的实施更多或者更少的组件。其中，本技术领域技术人员可以理解，这里的计算机设备是一种能够按照事先设定或存储的指令，自动进行数值计算和/或信息处理的设备，其硬件包括但不限于微处理器、专用集成电路（Application Specific Integrated Circuit，ASIC）、可编程门阵列（Field-Programmable Gate Array，FPGA）、数字处理器（Digital Signal Processor，DSP）、嵌入式设备等。
所述计算机设备可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。所述计算机设备可以与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互。
所述存储器51至少包括一种类型的计算机可读存储介质,所述计算机可读存储介质可以是非易失性,也可以是易失性,所述计算机可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,所述存储器51可以是所述计算机设备5的内部存储单元,例如该计算机设备5的硬盘或内存。在另一些实施例中,所述存储器51也可以是所述计算机设备5的外部存储设备,例如该计算机设备5上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。当然,所述存储器51还可以既包括所述计算机设备5的内部存储单元也包括其外部存储设备。本实施例中,所述存储器51通常用于存储安装于所述计算机设备5的操作系统和各类应用软件,例如文本录制视频清晰度检测方法的计算机可读指令等。此外,所述存储器51还可以用于暂时地存储已经输出或者将要输出的各类数据。
所述处理器52在一些实施例中可以是中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器52通常用于控制所述计算机设备5的总体操作。本实施例中,所述处理器52用于运行所述存储器51中存储的计算机可 读指令或者处理数据,例如运行所述文本录制视频清晰度检测方法的计算机可读指令。
所述网络接口53可包括无线网络接口或有线网络接口,该网络接口53通常用于在所述计算机设备5与其他电子设备之间建立通信连接。
本实施例通过获取业务录制视频;计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。文本录制视频片段的清晰度无需通过人眼观看视频来检测,更省时省力,效率更高。
本申请还提供了另一种实施方式,即提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可读指令,所述计算机可读指令可被至少一个处理器执行,以使所述至少一个处理器执行如上述的文本录制视频清晰度检测方法的步骤。
本实施例通过获取业务录制视频;计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。文本录制视频片段的清晰度无需通过人眼观看视频来检测,更省时省力,效率更高。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
显然,以上所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例,附图中给出了本申请的较佳实施例,但并不限制本申请的专利范围。本申请可以以许多不同的形式来实现,相反地,提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。尽管参照前述实施例对本申请进行了详细的说明,对于本领域的技术人员来而言,其依然可以对前述各具体实施方式所记载的技术方案进行修改,或者对其中部分技术特征进行等效替换。凡是利用本申请说明书及附图内容所做的等效结构,直接或间接运用在其他相关的技术领域,均同理在本申请专利保护范围之内。

Claims (20)

  1. 一种文本录制视频清晰度检测方法,包括下述步骤:
    获取业务录制视频;
    计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
    抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
    将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
    根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
  2. 根据权利要求1所述的文本录制视频清晰度检测方法,其中,在所述获取业务录制视频的步骤之后包括:
    获取与所述业务录制视频同步的音频;
    将所述音频进行文字转换,获取文字转换结果,将所述文字转换结果与预设的第一关键词和第二关键词比对,获取所述第一关键词和所述第二关键词在所述音频中首次出现的第一时间点和第二时间点;
    根据所述第一时间点和所述第二时间点构成的时间段截取所述业务录制视频,获得第一视频片段;
    计算所述第一视频片段的模糊度曲线,根据所述模糊度曲线,截取所述第一视频片段,获得待检测的文本录制视频片段。
  3. 根据权利要求1所述的文本录制视频清晰度检测方法,其中,所述抽取所述待检测的文本录制视频片段中的N个视频帧的步骤包括:
    将所述待检测的文本录制视频片段解析为视频帧集;
    按照设定的间隔从所述视频帧集中抽取L个视频帧子集,L为大于1的正整数,其中,所述视频帧子集为所述视频帧集中时间上相邻的M个视频帧构成,M为大于1的正整数。
  4. 根据权利要求3所述的文本录制视频清晰度检测方法,其中,所述根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度的步骤包括:
    根据所述各帧的帧清晰度判断所述各视频帧子集清晰度;
    根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度。
  5. 根据权利要求4所述的文本录制视频清晰度检测方法,其中,所述根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度的步骤包括:
    根据所述各视频帧子集清晰度,计算所述各视频帧子集中清晰的视频帧子集个数占所述抽取的视频帧子集总数L的比值;
    将所述比值与预设的第一阈值比较,当所述比值大于所述第一阈值时,判断所述待检测的文本录制视频片段清晰。
  6. 根据权利要求1所述的文本录制视频清晰度检测方法,其中,所述将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度的步骤包括:
    分别计算所述各帧的文字识别结果中包含的字符数;
    将所述字符数与预设的第二阈值比较,当所述字符数大于所述第二阈值时,判断对应的视频帧清晰。
  7. 一种文本录制视频清晰度检测装置,包括:
    获取模块,用于获取业务录制视频;
    截取模块,用于计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
    抽取模块,用于抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
    处理模块，用于将所述N个视频帧输入到基于OCR的文字识别模型中，获得所述N个视频帧中各帧的文字识别结果，根据所述文字识别结果判断所述各帧的帧清晰度；
    判断模块,用于根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
  8. 根据权利要求7所述的文本录制视频清晰度检测装置,其中,还包括:
    第一获取子模块,用于获取与所述业务录制视频同步的音频;
    第一处理子模块,用于将所述音频进行文字转换,获取文字转换结果,将所述文字转换结果与预设的第一关键词和第二关键词比对,获取所述第一关键词和所述第二关键词在所述音频中首次出现的第一时间点和第二时间点;
    第一截取子模块,用于根据所述第一时间点和所述第二时间点构成的时间段截取所述业务录制视频,获得第一视频片段;
    第二截取子模块,用于计算所述第一视频片段的模糊度曲线,根据所述模糊度曲线,截取所述第一视频片段,获得待检测的文本录制视频片段。
  9. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取业务录制视频;
    计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
    抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
    将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
    根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
  10. 根据权利要求9所述的计算机设备,其中,所述获取业务录制视频的步骤之后,所述处理器执行所述计算机可读指令时还实现如下步骤:
    获取与所述业务录制视频同步的音频;
    将所述音频进行文字转换,获取文字转换结果,将所述文字转换结果与预设的第一关键词和第二关键词比对,获取所述第一关键词和所述第二关键词在所述音频中首次出现的第一时间点和第二时间点;
    根据所述第一时间点和所述第二时间点构成的时间段截取所述业务录制视频,获得第一视频片段;
    计算所述第一视频片段的模糊度曲线,根据所述模糊度曲线,截取所述第一视频片段,获得待检测的文本录制视频片段。
  11. 根据权利要求9所述的计算机设备,其中,所述抽取所述待检测的文本录制视频片段中的N个视频帧的步骤包括:
    将所述待检测的文本录制视频片段解析为视频帧集;
    按照设定的间隔从所述视频帧集中抽取L个视频帧子集,L为大于1的正整数,其中,所述视频帧子集为所述视频帧集中时间上相邻的M个视频帧构成,M为大于1的正整数。
  12. 根据权利要求11所述的计算机设备,其中,所述根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度的步骤包括:
    根据所述各帧的帧清晰度判断所述各视频帧子集清晰度;
    根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度。
  13. 根据权利要求12所述的计算机设备,其中,所述根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度的步骤包括:
    根据所述各视频帧子集清晰度,计算所述各视频帧子集中清晰的视频帧子集个数占所述抽取的视频帧子集总数L的比值;
    将所述比值与预设的第一阈值比较,当所述比值大于所述第一阈值时,判断所述待检测的文本录制视频片段清晰。
  14. 根据权利要求9所述的计算机设备,其中,所述将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度的步骤包括:
    分别计算所述各帧的文字识别结果中包含的字符数;
    将所述字符数与预设的第二阈值比较,当所述字符数大于所述第二阈值时,判断对应的视频帧清晰。
  15. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机可读指令;所述计算机可读指令被处理器执行时实现如下步骤:
    获取业务录制视频;
    计算所述业务录制视频的模糊度曲线,根据所述模糊度曲线,截取所述业务录制视频,获得待检测的文本录制视频片段;
    抽取所述待检测的文本录制视频片段中的N个视频帧,其中N为大于1的正整数;
    将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度;
    根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度。
  16. 根据权利要求15所述的计算机可读存储介质,其中,在所述获取业务录制视频的步骤之后,所述计算机可读指令被处理器执行时还实现如下步骤:
    获取与所述业务录制视频同步的音频;
    将所述音频进行文字转换,获取文字转换结果,将所述文字转换结果与预设的第一关键词和第二关键词比对,获取所述第一关键词和所述第二关键词在所述音频中首次出现的第一时间点和第二时间点;
    根据所述第一时间点和所述第二时间点构成的时间段截取所述业务录制视频,获得第一视频片段;
    计算所述第一视频片段的模糊度曲线,根据所述模糊度曲线,截取所述第一视频片段,获得待检测的文本录制视频片段。
  17. 根据权利要求15所述的计算机可读存储介质,其中,所述抽取所述待检测的文本录制视频片段中的N个视频帧的步骤包括:
    将所述待检测的文本录制视频片段解析为视频帧集;
    按照设定的间隔从所述视频帧集中抽取L个视频帧子集,L为大于1的正整数,其中,所述视频帧子集为所述视频帧集中时间上相邻的M个视频帧构成,M为大于1的正整数。
  18. 根据权利要求17所述的计算机可读存储介质,其中,所述根据所述各帧的帧清晰度判断所述待检测的文本录制视频片段的清晰度的步骤包括:
    根据所述各帧的帧清晰度判断所述各视频帧子集清晰度;
    根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度。
  19. 根据权利要求18所述的计算机可读存储介质,其中,所述根据所述各视频帧子集清晰度判断所述待检测的文本录制视频片段的清晰度的步骤包括:
    根据所述各视频帧子集清晰度,计算所述各视频帧子集中清晰的视频帧子集个数占所述抽取的视频帧子集总数L的比值;
    将所述比值与预设的第一阈值比较,当所述比值大于所述第一阈值时,判断所述待检测的文本录制视频片段清晰。
  20. 根据权利要求15所述的计算机可读存储介质,其中,所述将所述N个视频帧输入到基于OCR的文字识别模型中,获得所述N个视频帧中各帧的文字识别结果,根据所述文字识别结果判断所述各帧的帧清晰度的步骤包括:
    分别计算所述各帧的文字识别结果中包含的字符数;
    将所述字符数与预设的第二阈值比较,当所述字符数大于所述第二阈值时,判断对应的视频帧清晰。
PCT/CN2021/124389 2020-11-17 2021-10-18 文本录制视频清晰度检测方法、装置、计算机设备及存储介质 WO2022105507A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011286396.2A CN112419257A (zh) 2020-11-17 2020-11-17 文本录制视频清晰度检测方法、装置、计算机设备及存储介质
CN202011286396.2 2020-11-17

Publications (1)

Publication Number Publication Date
WO2022105507A1 true WO2022105507A1 (zh) 2022-05-27

Family

ID=74830915

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124389 WO2022105507A1 (zh) 2020-11-17 2021-10-18 文本录制视频清晰度检测方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN112419257A (zh)
WO (1) WO2022105507A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419257A (zh) * 2020-11-17 2021-02-26 深圳壹账通智能科技有限公司 文本录制视频清晰度检测方法、装置、计算机设备及存储介质
CN114926464B (zh) * 2022-07-20 2022-10-25 平安银行股份有限公司 在双录场景下的图像质检方法、图像质检装置及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846622A (zh) * 2017-10-27 2018-03-27 北京雷石天地电子技术有限公司 一种检测字幕清晰度的方法及装置
US20200084485A1 (en) * 2018-09-06 2020-03-12 International Business Machines Corporation Detecting minimum viable display resolution of media content using optical character recognition
CN111741356A (zh) * 2020-08-25 2020-10-02 腾讯科技(深圳)有限公司 双录视频的质检方法、装置、设备及可读存储介质
CN112419257A (zh) * 2020-11-17 2021-02-26 深圳壹账通智能科技有限公司 文本录制视频清晰度检测方法、装置、计算机设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968638B (zh) * 2011-08-31 2016-06-08 上海夏尔软件有限公司 基于关键字光学字符识别的影像清晰度判断的方法
CN109831665B (zh) * 2019-01-16 2022-07-08 深圳壹账通智能科技有限公司 一种视频质检方法、系统及终端设备
CN111683285B (zh) * 2020-08-11 2021-01-26 腾讯科技(深圳)有限公司 文件内容识别方法、装置、计算机设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846622A (zh) * 2017-10-27 2018-03-27 北京雷石天地电子技术有限公司 一种检测字幕清晰度的方法及装置
US20200084485A1 (en) * 2018-09-06 2020-03-12 International Business Machines Corporation Detecting minimum viable display resolution of media content using optical character recognition
CN111741356A (zh) * 2020-08-25 2020-10-02 腾讯科技(深圳)有限公司 双录视频的质检方法、装置、设备及可读存储介质
CN112419257A (zh) * 2020-11-17 2021-02-26 深圳壹账通智能科技有限公司 文本录制视频清晰度检测方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN112419257A (zh) 2021-02-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893656

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.08.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21893656

Country of ref document: EP

Kind code of ref document: A1