CN107333189B - Segmentation method and device for detecting video and storage medium - Google Patents


Info

Publication number
CN107333189B
CN107333189B (granted publication of application CN201710637951.3A)
Authority
CN
China
Prior art keywords
video
detection
detecting
terminal
detected
Prior art date
Legal status
Active
Application number
CN201710637951.3A
Other languages
Chinese (zh)
Other versions
CN107333189A (en
Inventor
叶飞
何帆
Current Assignee
Shenzhen Huishoubao Tech Co ltd
Original Assignee
Shenzhen Huishoubao Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huishoubao Tech Co ltd filed Critical Shenzhen Huishoubao Tech Co ltd
Priority to CN201710637951.3A
Publication of CN107333189A
Application granted
Publication of CN107333189B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/30Administration of product recycling or disposal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0283Price estimation or determination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/24Arrangements for testing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02WCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W90/00Enabling technologies or technologies with a potential or indirect contribution to greenhouse gas [GHG] emissions mitigation


Abstract

An embodiment of the present invention provides a segmentation method for a detection video, comprising the following steps: acquiring a video shot during the detection process of a terminal to be detected, wherein the shot video includes time tags of video pictures; acquiring a time tag at which a detection item of the terminal to be detected is detected; and segmenting the shot video according to the time tag at which the detection item is detected and the time tags of the video pictures. Embodiments of the present invention also provide a processing device and a storage medium for the detection video. With the segmentation method, device, and storage medium provided by the embodiments of the present invention, a user can conveniently review the terminal detection process.

Description

Segmentation method and device for detecting video and storage medium
Technical Field
The present invention belongs to the field of video technology, and in particular relates to a method, an apparatus, and a storage medium for segmenting a detection video.
Background
Before a terminal is recycled, terminal detection equipment must first detect the terminal so that its value can be evaluated; the terminal may be a mobile phone, a tablet computer, an intelligent wearable device, or the like. At present, a common recycling approach is to mail the terminal to a terminal recycling company or a third-party evaluation organization, which evaluates the terminal's value and sends the evaluation result back to the terminal owner. However, because the whole evaluation process leaves no corresponding record, it is an invisible operation to the terminal owner, who wants to know how the detection was performed. It is therefore necessary to provide a method for recording the evaluation process so that the terminal owner can be informed of the relevant detection process.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, and a storage medium for segmenting a detection video.
An embodiment of the present invention provides a segmentation method for a detection video, comprising the following steps:
acquiring a video shot during the detection process of a terminal to be detected, wherein the shot video includes time tags of video pictures;
acquiring a time tag at which a detection item of the terminal to be detected is detected;
and segmenting the shot video according to the time tag at which the detection item is detected and the time tags of the video pictures.
An embodiment of the present invention provides a processing device for a detection video, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above segmentation method for a detection video when executing the computer program.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above segmentation method for a detection video.
With the segmentation method, device, and storage medium for a detection video provided by the embodiments of the present invention, a user can conveniently learn, as needed, the detection process of a specific detection item.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a method for segmenting a detection video according to an embodiment of the present invention;
fig. 2 is a second flowchart of a method for segmenting a detection video according to an embodiment of the present invention;
fig. 3 is a third flowchart of a method for segmenting a detection video according to an embodiment of the present invention;
fig. 4 is a fourth flowchart of a method for segmenting a detection video according to an embodiment of the present invention;
fig. 5 is a fifth flowchart of a method for segmenting a detection video according to an embodiment of the present invention;
fig. 6 is a sixth flowchart of a method for segmenting a detection video according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a processing device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The technical means of the present invention are explained below by way of specific examples.
Fig. 1 is a first flowchart of a method for segmenting a detection video according to an embodiment of the invention. The method processes a video shot during the terminal detection process so that the shot video can be segmented according to detection items, where segmenting means either decomposing the video shot during the shooting process into at least two shorter video segments or distinguishing different progress ranges of the complete video by marking. The segmentation method for a detection video comprises the following steps:
101: acquiring a video shot during the detection process of a terminal to be detected, wherein the shot video includes time tags of video pictures.
The terminal of the embodiments of the present invention includes, but is not limited to, an electronic device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), or an MP3/MP4 player. The terminal to be detected is a terminal whose performance needs to be determined through detection; the performance includes software performance, hardware performance, and the like. Software performance includes, but is not limited to, performance related to system software and application software; hardware performance includes, but is not limited to, the performance of hardware such as the touch screen, the terminal's exterior, and the camera. If a terminal user wants to trade in or have the terminal evaluated, corresponding detection must be carried out for the terminal, that is, a process of detecting the terminal's performance parameters. After detection, the terminal's value can be evaluated according to the detection result.
During the detection process, a video is shot by a camera device to record the detection of the terminal to be detected. The shot video may be High Definition (HD) or Standard Definition (SD), and may be source video data or video compressed with a standard such as H.264 or MPEG-2. In the embodiments of the present invention, the shot video of the detection process may be acquired in two ways: the device implementing the method may itself include a camera device that films the detection process of the terminal to be detected; or the device may receive, for example through a communication module, a video of the detection process shot by another camera device.
The time tag of a video picture in the embodiments of the present invention is a tag indicating time information in the video shot during the detection process. For example, the video shot from 10:00 to 10:03 carries the corresponding time information and forms the time tags of the corresponding video segment: 10:00 is the start time tag and 10:03 is the end time tag; that is, each video picture corresponds to a time tag carrying its shooting-time information. The time tag of a video picture may or may not be displayed in the video interface.
102: acquiring a time tag at which a detection item of the terminal to be detected is detected.
In the process of detecting the terminal to be detected, corresponding detection items can be formed as required: for example, an appearance detection item detects the exterior of the terminal to determine its appearance condition; a touch screen detection item detects the touch screen to determine its touch performance; and a camera detection item detects the camera to determine its camera performance.
In the process of detecting a terminal to be detected, the corresponding detection items may be displayed for detection personnel to select. Specifically, one or more items in a detection item list may be displayed on a display screen, and the user operates on a specific detection item in the list through an input device such as a mouse, a keyboard, or a touch screen, for example by clicking a corresponding operation icon or operation frame; these are interface elements for operation, including but not limited to icons, buttons, and operation frames. After the detection personnel operates on a specific detection item in the list, the device implementing the method of the embodiments of the present invention can acquire the time tags corresponding to the operation; for example, the detection personnel starts the appearance detection item at 10:00 and terminates it at 10:03.
The time tag at which a detection item is detected in the embodiments of the present invention refers to the time range or time point over which one or more detection items are detected. For example, the time range may be 10:00 to 10:03, or the time point may be 10:00.
103: segmenting the shot video according to the time tag at which the detection item of the terminal to be detected is detected and the time tags of the video pictures.
The segmentation method for a detection video provided by the embodiments of the present invention processes the video shot during terminal detection so as to segment it according to detection items. Segmenting means either decomposing the shot video into at least two video segments, i.e. decomposing the complete video into at least two separate video files, or distinguishing different progress ranges of the complete video by marking, i.e. placing marks (small dots, characters, images, and the like) on the video's progress bar to distinguish the different progress ranges.
According to the acquired time tag at which a detection item is detected, the video information corresponding to that time tag, such as a progress range or video pictures, is located in the shot video. For example, if the appearance detection item is detected from 10:00 to 10:03, the video shot in that time range is the video segment corresponding to the appearance detection item; the time point 10:00 at which the appearance detection starts is likewise the start time of that video segment.
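As an illustration only, the matching of detection-item time tags against per-picture time tags described above can be sketched in Python. The frame identifiers and item names below are hypothetical, not taken from the patent.

```python
from datetime import datetime

def segment_video(frame_tags, item_ranges):
    """Group video frames into per-detection-item segments by time tag.

    frame_tags:  list of (timestamp, frame_id) pairs in shooting order.
    item_ranges: dict mapping detection-item name -> (start_tag, end_tag).
    """
    return {item: [fid for ts, fid in frame_tags if start <= ts <= end]
            for item, (start, end) in item_ranges.items()}

# Hypothetical tags: one frame per minute from 10:00 to 10:04.
t = lambda s: datetime.strptime(s, "%H:%M")
frames = [(t("10:0%d" % i), i) for i in range(5)]
ranges = {"appearance": (t("10:00"), t("10:03"))}
print(segment_video(frames, ranges))  # {'appearance': [0, 1, 2, 3]}
```

The frames selected for an item could then be written out as a separate file, matching the "decompose into separate video files" variant of segmentation.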
With this method and device, a user can conveniently and quickly locate the desired range in the shot video as needed.
Fig. 2 is a partial flowchart of another implementation provided in an embodiment of the present invention. For the parts not shown, refer to the embodiment of Fig. 1. The specific process of this embodiment is as follows:
acquiring a video shot during the detection process of a terminal to be detected, wherein the shot video includes time tags of video pictures;
acquiring a time tag at which a detection item of the terminal to be detected is detected;
and segmenting the shot video according to the time tag at which the detection item is detected and the time tags of the video pictures.
The method provided by this embodiment of the present invention further comprises the following steps:
a time stamp adjustment factor is obtained 201.
In the process of detecting a terminal to be detected, the corresponding detection items are displayed for detection personnel to select. Specifically, one or more items in a detection item list may be displayed on a display screen, and the user operates on a specific detection item in the list through an input device such as a mouse, a keyboard, or a touch screen, for example by clicking a corresponding operation icon or operation frame; these are interface elements for operation, including but not limited to icons, buttons, and operation frames. After the detection personnel operates on a specific detection item in the list, the device implementing the method of the embodiments of the present invention can acquire the time tags corresponding to the operation; for example, the detection personnel starts the appearance detection item at 10:00 and terminates it at 10:03.
The time tag adjustment factor of the embodiments of the present invention is a factor for shifting the time tag at which a detection item starts forward or backward, and may be a specific time value, for example 15 seconds. Between the moment detection personnel operate on a specific detection item in the detection item list and the moment the terminal is actually detected, a certain time tag error may exist. For example, if the detection personnel only begin actually inspecting the terminal's exterior 15 seconds after starting the appearance detection item, then the time tag of the video shot 15 seconds after the item is started is the true start tag of the video corresponding to the appearance detection item.
202: adjusting the time tags corresponding to the detection items in the detection item list according to the adjustment factor.
The adjustment factor in the embodiments of the present invention may be a fixed value, or may be set per detection item, for example 15 seconds for the appearance detection item and 25 seconds for the touch screen detection item. The time tag corresponding to each detection item in the detection item list is shifted forward or backward according to that item's adjustment factor.
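A minimal sketch of applying per-item adjustment factors, assuming the factors are stored as seconds per item name; the names and values are illustrative, matching the examples above rather than any fixed scheme in the patent.

```python
from datetime import datetime, timedelta

# Hypothetical per-item adjustment factors, in seconds (values illustrative).
ADJUSTMENT_FACTOR = {"appearance": 15, "touch_screen": 25}

def adjust_start_tag(item, start_tag):
    """Shift the time tag at which an item's detection starts by its factor,
    so the tag matches when the personnel actually begin handling the terminal."""
    return start_tag + timedelta(seconds=ADJUSTMENT_FACTOR.get(item, 0))

start = datetime.strptime("10:00:00", "%H:%M:%S")
print(adjust_start_tag("appearance", start).time())  # 10:00:15
```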
With this method, the video segment corresponding to a detection item can be located more accurately, making it convenient for a user to view the content of the shot video in a targeted manner.
Fig. 3 is a partial flowchart of another implementation provided in an embodiment of the present invention. For the parts not shown, refer to the embodiments of Figs. 1 and 2. The specific process of this embodiment is as follows:
301: acquiring a video shot during the detection process of a terminal to be detected.
302: acquiring a time tag at which a detection item of the terminal to be detected is detected.
303: acquiring a self-evaluation list input by a user and a detection item list.
Before trading in the terminal to be detected or having it detected, the user may evaluate it. The user can enter input for the performance items of the terminal; these performance items may coincide with the terminal's detection items, may include more items, or may cover only part of the detection items. The user inputs a self-evaluation list for the terminal through an electronic device (by means including but not limited to manual input, voice input, and image input). The self-evaluation list contains the user's evaluation of the terminal's software and hardware performance, and the evaluation results may be represented by text, pictures, or identifiers, for example whether the exterior has scratches or whether the touch screen performs well.
In the process of detecting the terminal to be detected, corresponding detection items can be formed as required: for example, an appearance detection item detects the exterior of the terminal to determine its appearance condition; a touch screen detection item detects the touch screen to determine its touch performance; and a camera detection item detects the camera to determine its camera performance.
During or after the detection of the terminal's detection items, a detection item list for the terminal is formed, containing the detection result of each item. Specifically, a detection result may be represented by text, a picture, or an identifier, for example whether the exterior has scratches or whether the touch screen performs well.
304: comparing the self-evaluation list input by the user with the detection item list to determine their difference items.
The evaluation results in the self-evaluation list input by the user and the detection results in the detection item list are extracted and compared, matching item against item; for example, the appearance performance item input by the user is compared with the appearance detection item determined by the detection personnel. If the result in the user's self-evaluation list is inconsistent with the result in the detection item list, that detection item is determined to be a difference item: for example, the user's appearance performance item states that the exterior has no scratches, but after the detection personnel detect the terminal, the appearance detection item records that the exterior has scratches; the appearance detection item is then a difference item in the sense of the embodiments of the present invention. Conversely, if the results are consistent, the item is an item other than a difference item (also called a non-difference item).
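The comparison just described can be sketched as follows; both lists are assumed, for illustration, to be simple name-to-result mappings, and the item names and results are hypothetical.

```python
def difference_items(self_evaluation, detection_results):
    """Return the detection items whose user self-evaluation disagrees with
    the result recorded by the detection personnel (the difference items)."""
    return sorted(item for item, result in detection_results.items()
                  if item in self_evaluation and self_evaluation[item] != result)

# Hypothetical lists: the user claims no scratches, detection finds scratches.
self_eval = {"appearance": "no scratches", "touch_screen": "good"}
detected  = {"appearance": "scratches",    "touch_screen": "good"}
print(difference_items(self_eval, detected))  # ['appearance']
```

Items present in only one list are skipped, matching the note above that the self-evaluation list may cover more or fewer items than the detection item list.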
305: segmenting the shot video according to the time tag at which the detection item of the terminal to be detected is detected and the time tags of the video pictures.
306: encoding the video segments corresponding to the difference items in a first encoding mode, and encoding the video segments other than those of the difference items in a second encoding mode.
Video encoding is the conversion of a file in one video format into a file in another video format by a specific compression technique. Coding standards include, but are not limited to, the ITU-T standards H.261, H.263, and H.264, M-JPEG, the MPEG series of standards, RealVideo, WMV, and QuickTime. Different encoding standards affect the resolution and distortion of the video pictures. The first and second encoding modes of the embodiments of the present invention may each select one or more of the above standards to encode the shot video.
During the detection of the terminal to be detected, a user is likely to care most about the difference items between the performance the user evaluated and the detected performance, and therefore wants high-quality video pictures for the difference items so that they can be conveniently compared and reviewed. One possible approach is for the first encoding mode to use an encoding that loses little picture quality, and for the second encoding mode to use an encoding with a relatively high data compression rate.
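One way to realize the two modes is to pick different quality parameters per segment. The sketch below uses H.264 CRF values as an example (lower CRF means higher quality, larger output); the specific values are illustrative assumptions, not taken from the patent.

```python
def encoder_settings(item, difference_items):
    """Choose encoding parameters per video segment: the first mode favors
    picture quality for difference items, the second favors compression.
    CRF 18 vs 28 are illustrative H.264 quality settings."""
    if item in difference_items:
        return {"codec": "h264", "crf": 18}   # first mode: low quality loss
    return {"codec": "h264", "crf": 28}       # second mode: smaller output

print(encoder_settings("appearance", {"appearance"})["crf"])    # 18
print(encoder_settings("touch_screen", {"appearance"})["crf"])  # 28
```

The returned settings could then be passed to whatever encoder the device uses; only the selection logic is sketched here.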
With this method, the data volume of the shot video can be reduced while preserving the video quality of the segments the user cares about.
Fig. 4 is a partial flowchart of another implementation provided in an embodiment of the present invention. For the parts not shown, refer to the embodiments of Figs. 1-3.
The embodiment of the invention specifically comprises the following steps:
401: acquiring a video shot during the detection process of a terminal to be detected, wherein the shot video includes time tags of video pictures.
402: acquiring a time tag at which a detection item of the terminal to be detected is detected.
403: adding marks to the shot video according to the time tag at which the detection item is detected and the time tags of the video pictures, so as to segment the shot video.
According to the time tags determined in this embodiment of the present invention, different progress ranges of the complete video are distinguished by marks placed in the shot video, i.e. the shot video is segmented: marks (small dots, characters, images, and the like) are placed on the video's progress bar to distinguish the different progress ranges of the complete video.
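Placing a progress-bar mark amounts to converting each item's start time tag into a fractional position along the complete video. A minimal sketch, with hypothetical times:

```python
from datetime import datetime

def progress_markers(item_ranges, video_start, video_end):
    """Convert each item's start tag into a fractional position (0..1) on the
    complete video's progress bar, where a dot or text mark would be drawn."""
    total = (video_end - video_start).total_seconds()
    return {item: (start - video_start).total_seconds() / total
            for item, (start, _end) in item_ranges.items()}

t = lambda s: datetime.strptime(s, "%H:%M")
marks = progress_markers({"appearance": (t("10:03"), t("10:06"))},
                         t("10:00"), t("10:10"))
print(marks)  # the appearance mark sits 30% of the way along the bar
```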
With the method of this embodiment, the complete video can be segmented by marks, making it convenient for a user to quickly locate specific content.
Fig. 5 is a partial flowchart of another implementation provided in an embodiment of the present invention. For the parts not shown, refer to the embodiments of Figs. 1-4.
The embodiment of the invention specifically comprises the following steps:
501: acquiring a video shot during the detection process of a terminal to be detected, wherein the shot video includes time tags of video pictures.
502: acquiring time tags at which some of the detection items of the terminal to be detected are detected.
The time tag at which a detection item is detected in the embodiments of the present invention refers to the time range or time point over which one or more detection items are detected. For example, the time range may be 10:00 to 10:03, or the time point may be 10:00.
The method of this embodiment can acquire the time tags at which only some of the terminal's detection items are detected, i.e. the time tags of one or more selected detection items. Specifically, a preset detection item may be obtained and the time tag for that item then acquired in a targeted manner; or the time tags of some detection items may be acquired according to selection information input by the detection personnel or the user.
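Restricting the tags to a preset or user-selected subset is a simple filtering step; a sketch with hypothetical item names and tag ranges:

```python
def tags_for_selected_items(item_tags, selected_items):
    """Keep detection-time tags only for a preset or user-selected subset of
    detection items; the remaining items are ignored during segmentation."""
    return {item: tag for item, tag in item_tags.items()
            if item in selected_items}

all_tags = {"appearance": ("10:00", "10:03"),
            "touch_screen": ("10:03", "10:06"),
            "camera": ("10:06", "10:08")}
print(tags_for_selected_items(all_tags, {"appearance", "camera"}))
```

The filtered tags can then be fed to the same marking step as in the previous embodiment.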
503: adding marks to the shot video according to the time tags at which some of the detection items of the terminal to be detected are detected and the time tags of the video pictures, so as to segment the shot video.
According to the time tags determined in this embodiment of the present invention, different progress ranges of the complete video are distinguished by marks placed in the shot video, i.e. the shot video is segmented: marks (small dots, characters, images, and the like) are placed on the video's progress bar to distinguish the different progress ranges of the complete video.
With the method provided by this embodiment, the video is segmented in a targeted manner, making it convenient for a user to quickly locate the selected detection content.
Fig. 6 is a partial flowchart of another implementation manner provided in the embodiment of the present invention. Other parts not shown refer to the embodiment shown in fig. 1-5.
601, acquiring a shot video in the detection process of the terminal to be detected.
602, a self-evaluation list input by a user and a detection item list are obtained.
603, comparing the self-evaluation list input by the user with the detection item list to determine the difference items between the two lists.
The evaluation results in the self-evaluation list input by the user and the detection results in the detection item list are extracted and compared; specifically, identical items in the two lists are compared, for example, the appearance item input by the user is compared with the appearance detection item determined after detection by the detection personnel. If a result in the self-evaluation list input by the user is inconsistent with the corresponding result in the detection item list, that detection item is determined to be a difference item. For example, the appearance described in the item input by the user may be scratch-free, but after the detection personnel detect the terminal to be detected, the appearance recorded in the appearance detection item shows scratches; in this case, the appearance detection item is a difference item according to the embodiment of the present invention. Conversely, if a result in the self-evaluation list input by the user is consistent with the corresponding result in the detection item list, that item is an item other than a difference item (also called a non-difference item).
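A minimal sketch of this comparison step, assuming both lists are represented as simple item-to-result mappings (the representation and names are illustrative, not taken from the patent):

```python
def find_difference_items(self_evaluation, detection_results):
    """Return, sorted by name, the detection items whose user-reported
    result differs from the inspector's result. Only items present in
    both lists are compared, matching the 'compare the same items' step."""
    return sorted(
        item for item in self_evaluation
        if item in detection_results
        and self_evaluation[item] != detection_results[item]
    )

# User claims no scratches; the inspector records scratches -> difference item.
user = {"appearance": "no scratches", "screen": "works"}
inspector = {"appearance": "scratched", "screen": "works"}
print(find_difference_items(user, inspector))  # ['appearance']
```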
604, a time tag for detecting the detection item corresponding to a difference item of the terminal to be detected is obtained.
The time tag for detecting a detection item in the embodiment of the invention refers to the time range or time point at which one or more detection items are detected. For example, the time range may be 10:00 to 10:03, or the time point may be 10:00.
A time tag for detecting the detection item corresponding to a difference item among the detection items of the terminal to be detected is acquired. For example, if the result detected by the detection personnel for the appearance detection item is inconsistent with the result in the self-evaluation list input by the user, that is, the item is determined to be a difference item, the time tag for detecting the appearance detection item is obtained.
605 adding a tag to the shot video according to the time tag for detecting the detection item corresponding to the terminal difference item to be detected, so as to segment the shot video.
According to the time tags determined in the embodiment of the invention, markers are added to the shot video to distinguish different progress ranges of the complete video; that is, the shot video is segmented. Distinguishing different progress ranges by marking means marking the progress of the video, for example, marking the progress bar (with small dots, text, images and the like) so that different progress ranges of the complete video can be distinguished.
By the method provided by the embodiment of the invention, the difference items can be marked with emphasis, so that a user can conveniently and quickly locate the video content corresponding to the difference items.
The embodiment of the invention also provides another possible implementation mode, and other parts which are not shown refer to the embodiment shown in the figures 1 to 6. The specific contents are as follows:
and acquiring a shooting video in the detection process of the terminal to be detected.
And acquiring a self-evaluation list and a detection item list input by a user.
And comparing the self-evaluation list input by the user with the detection item list to determine the difference items of the self-evaluation list input by the user and the detection item list.
The acquiring of the time tag for detecting the detection item of the terminal to be detected comprises the following steps:
and acquiring a time tag for detecting the detection item corresponding to the terminal difference item to be detected.
The tagging the captured video according to the time tag to segment the captured video comprises:
adding a difference item label to the shot video according to the time tag for detecting the detection item corresponding to a difference item of the terminal to be detected; and adding a non-difference item label to the shot video according to the time tags for detecting the detection items other than those corresponding to the difference items, so as to segment the shot video. The difference item label and the non-difference item label differ in marking mode; specifically, they are distinguished by the shape, text or color of the marking pattern. For example, the video progress corresponding to a difference item may be marked with a black dot, and the video progress corresponding to a non-difference item may be marked with a red dot.
By the method provided by the embodiment of the invention, the difference items are displayed distinctively, so that a user can conveniently and quickly locate the video content corresponding to the difference items.
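The two-style marking scheme described above can be sketched as follows; the style fields (dot shape and black/red colors) follow the patent's example, while the function name and segment representation are hypothetical:

```python
def style_markers(segments, difference_items):
    """Attach a marker style to each video segment: segments for
    difference items get one style, all other segments another, so the
    two kinds can be told apart on the progress bar."""
    DIFF_STYLE = {"shape": "dot", "color": "black"}   # difference items
    OTHER_STYLE = {"shape": "dot", "color": "red"}    # non-difference items
    return {
        item: {"range": rng,
               "style": DIFF_STYLE if item in difference_items else OTHER_STYLE}
        for item, rng in segments.items()
    }

# Segments as (start, end) offsets in seconds on the video timeline.
segs = {"appearance": (0.0, 180.0), "screen": (180.0, 300.0)}
styled = style_markers(segs, {"appearance"})
print(styled["appearance"]["style"]["color"])  # black
print(styled["screen"]["style"]["color"])      # red
```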
Fig. 7 is a block diagram of a processing apparatus for detecting video according to an embodiment of the present invention. The processing apparatus for detecting video of the embodiment of the present invention may include all of the components shown in fig. 7, or may lack some of them. As shown in fig. 7, the processing device 700 may include a power supply 710 for supplying power to the other modules, a processor 720, a camera 730, a memory 740, a display device 750, and an input device 760. The memory 740 stores computer programs, including an operating system program 7422, application programs 7421, and the like. The processor 720 is operable to read the computer programs in the memory 740 and then execute the methods defined by those programs; for example, the processor 720 reads the operating system program 7422 to run an operating system on the processing device and perform the various functions of the operating system, or reads one or more application programs 7421 to run applications on the processing device.
The processor 720 may include one or more processors; for example, it may include one or more central processing units, or one central processing unit and one graphics processing unit. When the processor 720 includes multiple processors, the multiple processors may be integrated on the same chip or may be separate chips. A processor may include one or more processing cores.
The camera 730 is used for taking pictures or videos, and specifically may be a camera, or the like.
In addition to computer programs, the memory 740 also stores other data 7423, which may include data generated by running the operating system program 7422 or the application programs 7421, including system data (e.g., operating system configuration parameters) and user data, such as data generated during program execution.
The memory 740 typically includes an internal memory 741 and an external memory 742. The internal memory 741 may be a random access memory (RAM), a read-only memory (ROM), a cache (CACHE), or the like. The external memory 742 may be a flash memory, a hard disk, an optical disk, a USB disk, a floppy disk, a tape drive, or the like. Computer programs are typically stored in the external memory 742; the processor 720 loads them from the external memory 742 into the internal memory 741 before executing them.
The display device 750 is used for displaying the running information of the computer program in the processing device, and may specifically include a display screen, a projector, and the like.
The input device 760 is a device for inputting data and information to the processing device, and may specifically include a keyboard, a touch screen, and a microphone, and in some cases, the camera device may also serve as an input device.
The processing device for detecting video according to the embodiment of the present invention implements the above-described segmentation method for detecting video when the processor 720 executes the computer program.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for detecting video segmentation is implemented.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is a logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals according to legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A method for detecting segments of a video, the method comprising:
acquiring a shot video in a detection process aiming at a terminal to be detected, wherein the shot video comprises a time tag of a video picture;
acquiring a time tag for detecting a detection item of a terminal to be detected, wherein the time tag for detecting the detection item is time corresponding to the operation of an operator on the detection item;
segmenting the shot video according to the time label for detecting the detection item of the terminal to be detected and the time label of the video picture, specifically, dividing the shot video into video segments corresponding to the detection item according to the time label for detecting the detection item of the terminal to be detected and the time label of the video picture;
acquiring a self-evaluation list and a detection item list input by a user;
comparing the self-evaluation list input by the user with the detection item list to determine the difference items of the self-evaluation list input by the user and the detection item list;
coding the video segment corresponding to the difference item by adopting a first coding mode; and coding the video segments except the difference item by adopting a second coding mode.
2. The method of claim 1, wherein the method further comprises:
acquiring a time tag adjustment factor;
and adjusting the time labels corresponding to the detection items in the detection item list according to the adjusting factor.
3. The method according to claim 1 or 2, wherein the segmenting the shot video according to the time tag for detecting the detection item of the terminal to be detected and the time tag of the video picture comprises:
and adding a label on the shot video according to the time label for detecting the detection item of the terminal to be detected and the time label of the video picture so as to segment the shot video.
4. The method of claim 3, wherein the obtaining the time tag for detecting the detection item of the terminal to be detected comprises:
acquiring a time tag for detecting partial detection items in the detection items of the terminal to be detected;
adding a label on the shot video according to the time label for detecting the detection item of the terminal to be detected and the time label of the video picture, so as to segment the shot video, wherein the step of adding the label on the shot video comprises the following steps:
and adding a label on the shot video according to the time label for detecting part of the detection items in the detection items of the terminal to be detected and the time label of the video picture so as to segment the shot video.
5. The method of claim 3, wherein the method further comprises:
acquiring a self-evaluation list and a detection item list input by a user;
comparing the self-evaluation list input by the user with the detection item list to determine the difference items of the self-evaluation list input by the user and the detection item list;
the acquiring of the time tag for detecting the detection item of the terminal to be detected comprises the following steps:
acquiring a time tag for detecting a detection item corresponding to a terminal difference item to be detected;
adding a label on the shot video according to the time label for detecting the detection item of the terminal to be detected and the time label of the video picture, so as to segment the shot video, wherein the step of adding the label on the shot video comprises the following steps:
and adding a label on the shot video according to a time label for detecting a detection item corresponding to a terminal difference item to be detected and the time label of the video picture so as to segment the shot video.
6. The method of claim 3, wherein the method further comprises:
acquiring a self-evaluation list and a detection item list input by a user;
comparing the self-evaluation list input by the user with the detection item list to determine the difference items of the self-evaluation list input by the user and the detection item list;
the acquiring of the time tag for detecting the detection item of the terminal to be detected comprises the following steps:
acquiring a time tag for detecting a detection item corresponding to a terminal difference item to be detected;
adding a label on the shot video according to the time label for detecting the detection item of the terminal to be detected and the time label of the video picture, so as to segment the shot video, wherein the step of adding the label on the shot video comprises the following steps:
adding a first tag on the shot video according to a time tag for detecting a detection item corresponding to a terminal difference item to be detected; and adding a second label on the shot video according to the time label for detecting the corresponding detection item except the terminal difference item to be detected so as to segment the shot video.
7. A processing device for detecting video, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for detecting segmentation of video according to any one of claims 1 to 6 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a method of detecting segmentation of a video according to any one of claims 1 to 6.
CN201710637951.3A 2017-07-31 2017-07-31 Segmentation method and device for detecting video and storage medium Active CN107333189B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710637951.3A CN107333189B (en) 2017-07-31 2017-07-31 Segmentation method and device for detecting video and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710637951.3A CN107333189B (en) 2017-07-31 2017-07-31 Segmentation method and device for detecting video and storage medium

Publications (2)

Publication Number Publication Date
CN107333189A CN107333189A (en) 2017-11-07
CN107333189B true CN107333189B (en) 2020-10-02

Family

ID=60200840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710637951.3A Active CN107333189B (en) 2017-07-31 2017-07-31 Segmentation method and device for detecting video and storage medium

Country Status (1)

Country Link
CN (1) CN107333189B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189957B (en) * 2018-08-30 2022-05-31 维沃移动通信有限公司 Media data processing method and equipment
CN109783308A (en) * 2018-12-13 2019-05-21 深圳市收收科技有限公司 A kind of method and terminal device of terminal detection
CN109905779B (en) * 2019-03-25 2021-05-18 联想(北京)有限公司 Video data segmentation method and device and electronic equipment
CN115100739B (en) * 2022-06-09 2023-03-28 厦门国际银行股份有限公司 Man-machine behavior detection method, system, terminal device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542255B2 (en) * 2009-12-17 2013-09-24 Apple Inc. Associating media content items with geographical data
CN102645924B (en) * 2012-04-28 2014-03-26 成都西物信安智能系统有限公司 Control system for track transportation vehicle underbody safety check
CN104754267A (en) * 2015-03-18 2015-07-01 小米科技有限责任公司 Video clip marking method, device and terminal
CN105049768A (en) * 2015-08-18 2015-11-11 深圳市为有视讯有限公司 Video playback method of video recording equipment
CN106131627B (en) * 2016-07-07 2019-03-26 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, apparatus and system
CN106485964B (en) * 2016-10-19 2019-04-02 深圳市鹰硕技术有限公司 A kind of recording of classroom instruction and the method and system of program request

Also Published As

Publication number Publication date
CN107333189A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN107333189B (en) Segmentation method and device for detecting video and storage medium
US8675065B2 (en) Video monitoring system
US8854474B2 (en) System and method for quick object verification
US9001206B2 (en) Cascadable camera tampering detection transceiver module
US20160162497A1 (en) Video recording apparatus supporting smart search and smart search method performed using video recording apparatus
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN111753701B (en) Method, device, equipment and readable storage medium for detecting violation of application program
US20160381320A1 (en) Method, apparatus, and computer program product for predictive customizations in self and neighborhood videos
CN110781823B (en) Screen recording detection method and device, readable medium and electronic equipment
CN112333467A (en) Method, system, and medium for detecting keyframes of a video
CN109359582B (en) Information searching method, information searching device and mobile terminal
CN108763350B (en) Text data processing method and device, storage medium and terminal
CN108303237B (en) Terminal screen detection method, detection device and storage medium
CN115909127A (en) Training method of abnormal video recognition model, abnormal video recognition method and device
US11348254B2 (en) Visual search method, computer device, and storage medium
CN107370977B (en) Method, equipment and storage medium for adding commentary in detection video
CN107886518A (en) Picture detection method, device, electronic equipment and read/write memory medium
US20170013309A1 (en) System and method for product placement
CN101339662B (en) Method and device for creating video frequency feature data
CN107454267A (en) The processing method and mobile terminal of a kind of image
CN107360460B (en) Method, device and storage medium for detecting video added subtitles
CN108776959B (en) Image processing method and device and terminal equipment
CN111626369B (en) Face recognition algorithm effect evaluation method and device, machine readable medium and equipment
Panchal et al. Multiple forgery detection in digital video based on inconsistency in video quality assessment attributes
CN111428806B (en) Image tag determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 518000 building 20, Shenzhen International Innovation Center, No. 1006, Shennan Road, Futian District, Shenzhen, Guangdong, Guangdong

Patentee after: SHENZHEN HUISHOUBAO TECH Co.,Ltd.

Address before: 518000 building 7, building 8, Granville Software Park, Nanshan District science and Technology Park, Shenzhen, Guangdong

Patentee before: SHENZHEN HUISHOUBAO TECH Co.,Ltd.