CN107370977B - Method, equipment and storage medium for adding commentary in detection video - Google Patents


Info

Publication number
CN107370977B
CN107370977B (application CN201710638015.4A)
Authority
CN
China
Prior art keywords
video
detection
detection item
detected
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710638015.4A
Other languages
Chinese (zh)
Other versions
CN107370977A (en)
Inventor
叶飞
何帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Recycling Bao Technology Co Ltd
Original Assignee
Shenzhen Recycling Bao Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Recycling Bao Technology Co Ltd filed Critical Shenzhen Recycling Bao Technology Co Ltd
Priority to CN201710638015.4A priority Critical patent/CN107370977B/en
Publication of CN107370977A publication Critical patent/CN107370977A/en
Application granted granted Critical
Publication of CN107370977B publication Critical patent/CN107370977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H04N5/92 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/9201 Transformation involving the multiplexing of an additional signal and the video signal
    • H04N5/9202 Transformation involving the multiplexing of an additional signal and the video signal, the additional signal being a sound signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a method for adding commentary to a detection video, comprising the following steps: acquiring a video shot during the detection process of a terminal to be detected; acquiring a detection item list of the terminal to be detected, wherein the detection item list comprises detection item description information for detecting the terminal to be detected; generating detection video commentary information according to the detection item list of the terminal to be detected; and adding the detection video commentary information to the shot video. The embodiment of the invention also provides a device and a storage medium for adding commentary to a detection video. The method, device, and storage medium provided by the embodiments of the invention make it easier for a user to understand the terminal detection process.

Description

Method, equipment and storage medium for adding commentary in detection video
Technical Field
The invention belongs to the technical field of video, and in particular relates to a method, a device, and a storage medium for adding commentary to a detection video.
Background
Before a terminal is recycled, terminal detection equipment must first detect the terminal in order to evaluate its value. The terminal may be a mobile phone, a tablet computer, an intelligent wearable device, or the like. At present, a common recycling practice is to mail the terminal to a terminal recycling company or a third-party evaluation organization, which evaluates the value of the terminal and sends the evaluation result back to the terminal owner. However, because the evaluation process leaves no corresponding record, it is opaque to the terminal owner, who nevertheless wants to understand how the detection was performed. It is therefore necessary to provide a method of recording the evaluation process so that the terminal owner can be informed of the relevant detection steps.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a storage medium for adding commentary to a detected video.
The embodiment of the invention provides a method for adding commentary to a detection video, comprising the following steps:
acquiring a shooting video in the detection process of a terminal to be detected;
acquiring a detection item list of a terminal to be detected, wherein the detection item list comprises detection item description information for detecting the terminal to be detected;
generating detection video comment information according to the detection item list of the terminal to be detected;
adding the detected video commentary information to the captured video.
An embodiment of the present invention provides a device for adding commentary to a detection video, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the above method for adding commentary to a detection video.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above method for adding commentary to a detection video.
The method, device, and storage medium for adding commentary to a detection video provided by the embodiments of the present invention make it easier for a user to understand the terminal detection process.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a first flowchart of a method for adding commentary to a detection video according to an embodiment of the present invention;
fig. 2 is a second flowchart of a method for adding commentary to a detection video according to an embodiment of the present invention;
fig. 3 is a third flowchart of a method for adding commentary to a detection video according to an embodiment of the present invention;
fig. 4 is a fourth flowchart of a method for adding commentary to a detection video according to an embodiment of the present invention;
fig. 5 is a fifth flowchart of a method for adding commentary to a detection video according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a device for adding commentary to a detection video according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a first flowchart of a method for adding commentary to a detection video according to an embodiment of the present invention. The method edits the video shot during the terminal detection process to add corresponding commentary information. The commentary information may be one or a combination of voice information, text information, image information, and animation information, such as subtitles, bullet comments (danmaku), floating animation, and video narration. The method for adding commentary to the detection video comprises the following steps:
101, acquiring a shooting video in the detection process of a terminal to be detected.
Terminals in the embodiments of the present invention include, but are not limited to, electronic devices such as mobile phones, tablet computers, PDAs (Personal Digital Assistants), MP3 players, MP4 players, and the like. The terminal to be detected is a terminal whose performance needs to be determined through detection; the performance includes software performance, hardware performance, and the like. Software performance includes, but is not limited to, the performance of system software and application software; hardware performance includes, but is not limited to, the performance of the touch screen, the terminal's appearance, the camera, and the like. If a terminal user wants to trade in or have their terminal evaluated, the terminal must undergo corresponding detection, that is, its performance parameters must be measured. After detection, the value of the terminal can be evaluated according to the detection result.
During detection, a video is shot by a camera device to record the detection process of the terminal to be detected. The shot video may be high-definition (HD) or standard-definition (SD), and may be raw source video or video encoded with a standard such as H.264, MPEG-2, or MPEG-4. Acquiring the shot video of the detection process may work in two ways: the device implementing the method of the embodiment of the present invention includes a camera device, which shoots the detection process of the terminal to be detected to obtain the video; or the device receives a video of the detection process shot by another camera device, for example over a wired or wireless communication link.
102, a detection item list of a terminal to be detected is obtained, wherein the detection item list comprises detection item description information for detecting the terminal to be detected.
During detection of the terminal to be detected, corresponding detection items can be created as required: for example, an appearance detection item to inspect the appearance of the terminal and determine its cosmetic condition; a touch screen detection item to test the touch screen and determine its touch performance; and a camera detection item to test the camera and determine its imaging performance.
During or after the detection of the detection item corresponding to the terminal to be detected is completed, a corresponding detection item list of the terminal to be detected is formed, wherein the detection item list comprises description information of the detection item corresponding to the terminal to be detected. Specifically, the description information of the detection item corresponding to the terminal to be detected may be represented by text, picture, voice, or identifier, for example, "scratch exists in appearance", "good performance of touch screen", and the like.
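As a concrete illustration of the list structure just described, the following Python sketch models a detection item list; the field names and example values are hypothetical, not taken from the patent:

```python
# Hypothetical shape of a detection item list: each entry carries the
# item name and the description information (detection result) for it.
detection_item_list = [
    {"item": "appearance", "result": "scratch exists in appearance"},
    {"item": "touch screen", "result": "good performance of touch screen"},
    {"item": "camera", "result": "camera works normally"},
]

def descriptions(item_list):
    """Collect the description information carried by each detection item."""
    return [entry["result"] for entry in item_list]
```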
103, generating the detection video comment information according to the detection item list of the terminal to be detected.
The video commentary information of the embodiment of the invention includes, but is not limited to, one or more combinations of voice information, text information, image information and animation information, and specifically, may be individual voice information, individual text information, individual image information, individual animation information, or a combination of voice information and text information, or a combination in other forms, and the like.
The text information may be a video subtitle displayed in a specific area of the video playing interface, for example below the video picture, to help the user follow the content of the video; the text may be rendered on the video picture as animation, characters, symbols, emoji, and the like. A video subtitle can be a subtitle picture containing the text, composited into the video frames; it can be text in a set subtitle format (for example, srt or ssa) superimposed on the video picture; or it can be text added directly onto the video frames.
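One of the formats named above, srt, is simple enough to generate directly. The following sketch builds SRT subtitle text from timed detection descriptions; the entries are invented examples, and the timestamp formatting follows the SRT convention HH:MM:SS,mmm:

```python
from datetime import timedelta

def to_srt_time(seconds):
    """Format a second count as an SRT timestamp (HH:MM:SS,mmm)."""
    total_ms = int(timedelta(seconds=seconds).total_seconds() * 1000)
    h, rem = divmod(total_ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def make_srt(entries):
    """Build SRT text from (start_s, end_s, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(entries, 1):
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

srt = make_srt([(0.0, 3.5, "Appearance check: scratch found"),
                (3.5, 8.0, "Touch screen: good performance")])
```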
The voice information may be a voice file embedded in the video and played synchronously with the video picture, providing the user with audio corresponding to the frames being shown. The voice files of the embodiments of the present invention may also, in some cases, be referred to as audio sources or audio files, and include audio files playable by a Digital Video Disc (DVD) player, a Compact Disc (CD) player, a digital video recorder (DVR), a laser disc player, or any other suitable audio media player.
The image information and the animation information can be embedded in the video picture or can be displayed in a floating way in the video picture, and visual information except the video picture is provided for a user.
The video commentary information of the detection video is generated according to the detection item list. Specifically, the commentary may be called, extracted, or converted from the detection result information in the list: called from a pre-stored source such as a database; extracted from one or more pre-stored audio or video files; or produced by converting the detection result itself. Taking voice commentary as an example, the method may call the voice file corresponding to a detection result from a pre-stored voice database in which a mapping between voice files and detection results has been established, retrieving the file directly through that mapping; or extract the corresponding voice file from one or more pre-stored audio or video files; or convert the text of the detection result into voice information and form a voice file.
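The call-or-convert logic above can be sketched as follows; the `VOICE_DB` mapping and the file paths are hypothetical stand-ins for the pre-stored voice database, and the text-to-speech step is abstracted into a callback:

```python
# Hypothetical voice database: detection result text -> voice file path.
VOICE_DB = {
    "scratch exists in appearance": "voices/appearance_scratch.wav",
    "good performance of touch screen": "voices/touch_good.wav",
}

def commentary_for(result, fallback_tts=None):
    """Return a pre-recorded voice file for a detection result via the
    mapping relation, falling back to text-to-speech conversion (or the
    raw text) when no pre-stored file exists."""
    path = VOICE_DB.get(result)
    if path is not None:
        return ("prerecorded", path)
    if fallback_tts is not None:
        # Convert the textual detection result into synthesized speech.
        return ("synthesized", fallback_tts(result))
    return ("text", result)
```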
104 adding the detected video commentary information to the captured video.
In the embodiment of the present invention, the video comment information is added to the shot video, and comment information (for example, characters, pictures, etc.) may be embedded into the shot video, that is, the comment information is fused with the shot video; or adding the video commentary information into the shot video, for example, adding a voice file into the shot video to ensure that corresponding voice information is sent out in the process of playing the video; or narration information (such as characters, pictures and the like) is displayed in a floating mode in the shot video.
If the detection video commentary information is a voice file, that is, a detection video voice file, the generated voice file may replace the original audio track of the shot video, i.e., the audio recorded while the video was being shot.
Specifically, the obtained text information may be synthesized with a shot video in the detection process of the terminal to be detected, for example, a subtitle picture of a video subtitle is synthesized into the shot video; or overlapping the text information corresponding to the video subtitles to the shot video picture; or synthesizing the video subtitle and the video picture into a new video picture.
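Superimposing subtitle text onto a shot video is commonly done with a tool such as ffmpeg, whose `subtitles` video filter burns an SRT file into the frames. The patent does not name a specific tool, so this is only one plausible realization; the sketch below builds the command line without running it, and the file names are placeholders:

```python
def ffmpeg_burn_subtitles_cmd(video_in, srt_file, video_out):
    """Build an ffmpeg command that burns an SRT subtitle file into the
    video frames (hard subtitles), keeping the audio track unchanged."""
    return ["ffmpeg", "-i", video_in,
            "-vf", f"subtitles={srt_file}",  # requires ffmpeg built with libass
            "-c:a", "copy", video_out]
```

To actually run it one would pass the list to `subprocess.run`, which avoids shell-quoting issues with file names.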
In this way, a user can conveniently understand the relevant detection content in the detection video of the terminal to be detected.
Fig. 2 is a partial flowchart of another implementation provided in an embodiment of the present invention. For parts not shown, refer to the embodiment shown in fig. 1. The specific process of this embodiment is as follows:
acquiring a shooting video in the detection process of a terminal to be detected;
acquiring a detection item list of a terminal to be detected, wherein the detection item list comprises detection item description information for detecting the terminal to be detected;
generating detection video comment information according to the detection item list of the terminal to be detected, wherein the detection video comment information is a voice file, namely a detection video voice file;
adding the detected video commentary information to the captured video.
The generating of the video and voice detection file according to the detection item list of the terminal to be detected comprises the following steps:
201, calling the voice file corresponding to the detection item evaluation result from the detection video voice database according to the detection item evaluation result in the detection item list.
During or after the detection of the detection item corresponding to the terminal to be detected is completed, a corresponding detection item list of the terminal to be detected is formed, wherein the detection item list comprises the detection result of the item corresponding to the terminal to be detected. Specifically, the detection result of the terminal to be detected corresponding to the detection item may be represented by text information such as a character, a picture, or an identifier, for example, "scratch exists in appearance," the performance of the touch screen is good, "and the like.
A detection video voice database for the detection items of the terminal to be detected is established in advance. The database contains the voice files corresponding to the detection results of the detection items, that is, a mapping relation between detection results and detection video voice files, so that the corresponding voice file can be found from a detection result. For example, "scratch exists in appearance" has a corresponding voice file in the database, and so does "good performance of touch screen".
202, determining the called voice file corresponding to the detection item evaluation result as the detection video voice file.
The voice file of the corresponding evaluation result is called from the detection video voice database according to the evaluation result of the detection item and the mapping relation, and is determined as the detection video voice file.
In this way, a user can conveniently understand the relevant detection content while watching the detection video of the terminal to be detected.
Fig. 3 is a partial flowchart of another implementation provided in an embodiment of the present invention. For parts not shown, refer to the embodiment shown in fig. 1 or 2. The specific process of this embodiment is as follows:
acquiring a shooting video in the detection process of a terminal to be detected;
acquiring a detection item list of a terminal to be detected, wherein the detection item list comprises detection item description information for detecting the terminal to be detected;
generating detection video comment information according to the detection item list of the terminal to be detected, wherein the detection video comment information is a voice file, namely a detection video voice file;
adding the detected video commentary information to the captured video.
Wherein adding the detected video commentary information to the captured video comprises:
301, the detection video comment information corresponding to the detection item in the detection item list and the video segment corresponding to the detection item in the detection item list are obtained.
The detection item list of the embodiment of the present invention includes one or more detection items, and the specific detection items may include, but are not limited to, appearance detection items, touch screen detection items, display screen detection items, camera detection items, and the like. Each detection item includes one or more detection results, and the specific detection result may be represented by text information such as a text, a picture, or an identifier, for example, descriptions such as "scratch exists in appearance", "good performance of touch screen", and the like.
When the detection video comment information is generated, detection video comment information corresponding to detection items is respectively formed according to results of specific detection items, that is, each detection item has detection video comment information matched with the detection item, for example, an appearance detection item has detection video comment information corresponding to an appearance condition; the touch screen detection items have detection video description information corresponding to the touch screen condition.
Meanwhile, in the process of detecting the detection item, a video segment corresponding to the detection item is formed, for example, a video shot in the process of detecting the appearance detection item is a video segment corresponding to the appearance detection item; and the video shot in the detection process of the touch screen detection item is a video segment corresponding to the touch screen detection item. The video segment in the embodiment of the invention can be a single video segment, namely a video formed by one or more single files corresponding to one detection item; or may be part of a video.
302, adding the detected video comment information corresponding to the detected item in the detected item list to the video segment corresponding to the detected item in the detected item list.
The detection video commentary information corresponding to each detection item in the obtained detection item list is added to the video segment corresponding to that same detection item. For example, the commentary corresponding to the appearance detection item is added to the video segment of the appearance detection item, and the commentary corresponding to the touch screen detection item is added to the video segment of the touch screen detection item.
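The per-item pairing described above amounts to joining two mappings on the detection-item key. A minimal sketch, with invented item names and values:

```python
def attach_commentary(segments, commentary):
    """Pair each detection item's video segment with the commentary
    generated for that same item (both dicts keyed by item name)."""
    return {item: {"segment": seg, "commentary": commentary.get(item)}
            for item, seg in segments.items()}

paired = attach_commentary(
    {"appearance": ("10:00:00", "10:03:00")},
    {"appearance": "voices/appearance_scratch.wav"},
)
```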
By the method, the detection video comment information corresponding to the detection item can be added to the corresponding video segment, and a user can know the content of the detection video conveniently.
Fig. 4 is a partial flowchart of another implementation provided in an embodiment of the present invention. For parts not shown, refer to the embodiments shown in fig. 1, 2, or 3.
In the embodiment shown in fig. 3, acquiring a video segment corresponding to a detection item in the detection item list includes:
401 obtains the time tag corresponding to the detection item in the detection item list.
During detection of the terminal to be detected, the corresponding detection items are displayed for the detection person to select. Specifically, one or more items in the detection item list may be shown on a display screen, and the user operates on a specific detection item through an input device such as a mouse, keyboard, or touch screen, for example by clicking a corresponding operation icon or operation frame; such interface elements include, but are not limited to, icons, buttons, operation frames, and the like.
A time tag in the embodiments of the present invention indicates the time at which an operation was performed on a detection item, or time information within the shot video, during the detection process. Specifically, after the detection person operates on a specific detection item in the list, the device implementing the method can acquire the time of that operation and record it as the time tag of the detection item. For example, if the detection person starts the appearance detection at 10:00 and terminates it at 10:03, the start time tag of the appearance detection item is 10:00 and the termination time tag is 10:03.
402, determining the video shot in the time tag range corresponding to the detection item in the detection item list as the video segment corresponding to the detection item in the detection item list.
The camera device records the detection person's detection process on the terminal to be detected, forming the corresponding shot video. While shooting, time tags corresponding to the video frames are recorded; for example, video shot between 10:00 and 10:03 carries the corresponding time tags. Once the time tags of a detection item are acquired, the portion of the shot video falling within those tags can be determined as the video segment corresponding to that detection item. For example, if the appearance detection starts at 10:00 and terminates at 10:03, the video shot between 10:00 and 10:03 is determined as the video segment of the appearance detection item.
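Turning wall-clock time tags into offsets within the shot video can be sketched as follows, assuming the recording start time is known; the clock-time string format is an illustrative assumption:

```python
from datetime import datetime

def segment_range(video_start, item_start, item_end):
    """Convert wall-clock time tags ("HH:MM:SS") into start/end offsets
    in seconds within a video whose recording began at video_start."""
    fmt = "%H:%M:%S"
    t0 = datetime.strptime(video_start, fmt)
    start = (datetime.strptime(item_start, fmt) - t0).total_seconds()
    end = (datetime.strptime(item_end, fmt) - t0).total_seconds()
    return (start, end)
```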
Fig. 5 is a partial flowchart of another implementation provided in an embodiment of the present invention. For parts not shown, refer to the embodiments shown in fig. 1, 2, or 3.
In the embodiment shown in fig. 4, acquiring a video segment corresponding to a detection item in the detection item list includes:
501 obtains a time tag adjustment factor.
During detection of the terminal to be detected, the corresponding detection items are displayed for the detection person to select. Specifically, one or more items in the detection item list may be shown on a display screen, and the user operates on a specific detection item through an input device such as a mouse, keyboard, or touch screen, for example by clicking a corresponding operation icon or operation frame; such interface elements include, but are not limited to, icons, buttons, operation frames, and the like. After the detection person operates on a specific detection item, the device implementing the method can acquire the time tags corresponding to the operation, for example recording that the detection person started the appearance detection at 10:00 and terminated it at 10:03.
The time tag adjustment factor of the embodiments of the present invention is a factor for shifting a detection item's time tag forward or backward, and may be a specific time value, for example 15 seconds. Between the moment the detection person operates on a detection item in the list and the moment actual detection of the terminal begins, there may be a time difference. For example, if the detection person only begins actually inspecting the appearance 15 seconds after starting the appearance detection item, then the video shot 15 seconds after the item was started marks the true start of the appearance detection segment.
502 adjusting the time stamp corresponding to the detection item in the detection item list according to the adjustment factor.
The adjustment factor in the embodiments of the present invention may be a fixed value, or may be set per detection item, for example 15 seconds for the appearance detection item and 25 seconds for the touch screen detection item. The time tags corresponding to the detection items in the list are then shifted forward or backward according to each item's factor.
503, determining the video shot within the adjusted time tag range corresponding to the detection item in the detection item list as the video segment corresponding to that detection item.
The camera device records the detection person's detection process on the terminal to be detected, forming the corresponding shot video. While shooting, time tags corresponding to the video frames are recorded; for example, video shot between 10:00 and 10:03 carries the corresponding time tags. Once the adjusted time tags of a detection item are acquired, the portion of the shot video falling within them can be determined as the video segment corresponding to that item. For example, if the appearance detection starts at 10:00 and terminates at 10:03, and the adjustment factor of the appearance detection item is 15 seconds, then the video shot between 10:00:15 and 10:03:00 is determined as the video segment of the appearance detection item.
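The adjusted segment selection can be sketched as follows; the per-item factor table uses the example values given above (15 s for appearance, 25 s for touch screen) and is otherwise hypothetical:

```python
# Hypothetical per-item adjustment factors, in seconds; items without an
# entry are left unadjusted.
ADJUSTMENT_FACTORS = {"appearance": 15, "touch screen": 25}

def adjusted_tags(item, start_s, end_s):
    """Shift the item's start tag forward by its adjustment factor to
    skip the delay between starting the item and actually testing it."""
    return (start_s + ADJUSTMENT_FACTORS.get(item, 0), end_s)
```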
By the above method, the video segment corresponding to a detection item can be located more accurately, which improves the accuracy with which the commentary information matches the video content.
Please refer to fig. 6, which illustrates an apparatus for adding commentary to a detection video according to an embodiment of the present invention. The detection video editing device of the embodiment of the present invention may include all the components shown in fig. 6, or may lack some of them. As shown in fig. 6, the apparatus 600 may include a power supply 610 for providing power to the other modules, a processor 620, a camera 630, a memory 640, a display device 650, and an input device 660. The memory 640 stores computer programs, including an operating system program 6422, application programs 6421, and the like. The processor 620 is operable to read a computer program in the memory 640 and then execute the method defined by that program; for example, the processor 620 reads the operating system program 6422 to run the operating system on the editing device and implement various functions of the operating system, or reads the one or more application programs 6421 to run applications on the editing device.
Processor 620 may include one or more processors; for example, processor 620 may include one or more central processors, or one central processor and one graphics processor. When the processor 620 includes multiple processors, the multiple processors may be integrated on the same chip or may each be an independent chip. A processor may include one or more processing cores.
The camera 630 is used for taking pictures or videos, and may specifically be a camera, or the like.
In addition to computer programs, the memory 640 also stores other data 6423, which may include data generated by running the operating system program 6422 or the application programs 6421, including system data (e.g., operating system configuration parameters) and user data, such as data generated by running processes.
Memory 640 typically includes internal memory 641 and external memory 642. The internal memory 641 may be a Random Access Memory (RAM), a Read-Only Memory (ROM), a cache (CACHE), or the like. The external memory 642 may be a flash memory, a hard disk, an optical disk, a USB disk, a floppy disk, a tape drive, or the like. The computer program is typically stored in the external memory 642, and the processor 620 loads the computer program from the external memory 642 into the internal memory 641 before executing it.
The display device 650 is used for displaying the running information of the computer program in the editing device, and may specifically include a display screen, a projector, and the like.
The input device 660 is a device for inputting data and information to the editing device, and may specifically include a keyboard, a touch screen, and a microphone, and in some cases, the camera device may also serve as an input device.
The editing apparatus for the detection video according to the embodiment of the present invention implements the above-described method for adding commentary to a detection video when the processor 620 executes the computer program.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method for adding commentary to a detection video.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is a logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (8)

1. A method of adding commentary to detected video, the method comprising:
acquiring a shooting video in the detection process of a terminal to be detected;
acquiring a detection item list of a terminal to be detected, wherein the detection item list comprises detection item description information for detecting the terminal to be detected;
generating detection video comment information according to the detection item list of the terminal to be detected, wherein the generated detection video comment information is specifically a detection video voice file;
adding the detected video commentary information into the shot video, wherein the adding the detected video commentary information into the shot video specifically includes: acquiring a detection video voice file corresponding to a detection item in a detection item list and a video segment corresponding to the detection item in the detection item list;
and adding a detection video voice file corresponding to a detection item in the detection item list into a video segment corresponding to the detection item in the detection item list.
2. The method of claim 1, wherein the detection item description information is text information;
the generating of the detection video voice file according to the detection item list of the terminal to be detected comprises:
and converting the text information in the detection item list of the terminal to be detected into a detection video voice file.
3. The method of claim 1, wherein the list of test items includes test item evaluation results;
the generating of the detection video voice file according to the detection item list of the terminal to be detected comprises:
calling a voice file corresponding to the detection item evaluation result in a video detection voice database according to the detection item evaluation result in the detection item list;
and determining the called voice file corresponding to the detection item evaluation result as the detection video voice file.
4. The method according to claim 1, wherein the adding of the detection video voice file to the shot video is specifically:
and replacing the original voice file contained in the shot video with the detection video voice file.
5. The method of claim 1, wherein the method further comprises:
acquiring a time tag corresponding to a detection item in the detection item list;
and determining the video shot in the time tag range corresponding to the detection item in the detection item list as the video segment corresponding to the detection item in the detection item list.
6. The method of claim 5, wherein the method further comprises:
acquiring a time tag adjustment factor;
adjusting the time labels corresponding to the detection items in the detection item list according to the adjustment factors;
and determining the video shot in the time tag range corresponding to the detection item in the adjusted detection item list as the video segment corresponding to the detection item in the detection item list.
7. An apparatus for detecting commentary added to a video, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method for detecting commentary added to a video according to any one of claims 1 to 6.
8. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements a method for detecting a comment added to a video according to any one of claims 1 to 6.
CN201710638015.4A 2017-07-31 2017-07-31 Method, equipment and storage medium for adding commentary in detection video Active CN107370977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710638015.4A CN107370977B (en) 2017-07-31 2017-07-31 Method, equipment and storage medium for adding commentary in detection video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710638015.4A CN107370977B (en) 2017-07-31 2017-07-31 Method, equipment and storage medium for adding commentary in detection video

Publications (2)

Publication Number Publication Date
CN107370977A CN107370977A (en) 2017-11-21
CN107370977B true CN107370977B (en) 2020-01-17

Family

ID=60308663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710638015.4A Active CN107370977B (en) 2017-07-31 2017-07-31 Method, equipment and storage medium for adding commentary in detection video

Country Status (1)

Country Link
CN (1) CN107370977B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110971970B (en) * 2019-11-29 2022-03-11 维沃移动通信有限公司 Video processing method and electronic equipment
CN110971964B (en) * 2019-12-12 2022-11-04 腾讯科技(深圳)有限公司 Intelligent comment generation and playing method, device, equipment and storage medium
CN113517004B (en) * 2021-06-16 2023-02-28 深圳市中金岭南有色金属股份有限公司凡口铅锌矿 Video generation method, device, terminal equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567111A (en) * 2008-04-22 2009-10-28 日立欧姆龙金融系统有限公司 Auto-trading device with cash
CN102645924A (en) * 2012-04-28 2012-08-22 成都西物信安智能系统有限公司 Control system for track transportation vehicle underbody safety check
CN105139714A (en) * 2015-10-10 2015-12-09 国电南瑞科技股份有限公司 Visualized simulation training system and method for electrified railway traction substation
CN105933666A (en) * 2016-06-19 2016-09-07 罗轶 Multi-lens travel recorder
CN107113454A (en) * 2014-10-29 2017-08-29 Dlvr公司 Configuration, which is quoted, is used for the inventory file for the infrastructure services provider that adaptive streaming transmits video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7636033B2 (en) * 2006-04-05 2009-12-22 Larry Golden Multi sensor detection, stall to stop and lock disabling system


Also Published As

Publication number Publication date
CN107370977A (en) 2017-11-21

Similar Documents

Publication Publication Date Title
US20220229536A1 (en) Information processing apparatus display control method and program
CN109729420B (en) Picture processing method and device, mobile terminal and computer readable storage medium
US10062412B2 (en) Hierarchical segmentation and quality measurement for video editing
US20160198097A1 (en) System and method for inserting objects into an image or sequence of images
CN103136746A (en) Image processing device and image processing method
KR20070026228A (en) Video processing apparatus, video processing method and program
CN105430512A (en) Method and device for displaying information on video image
CN101631220B (en) Reproducing apparatus
CN107370977B (en) Method, equipment and storage medium for adding commentary in detection video
CN106791535B (en) Video recording method and device
US11211097B2 (en) Generating method and playing method of multimedia file, multimedia file generation apparatus and multimedia file playback apparatus
WO2009125166A1 (en) Television receiver and method
JP6999516B2 (en) Information processing equipment
CN101553814A (en) Method and apparatus for generating a summary of a video data stream
US8244005B2 (en) Electronic apparatus and image display method
CN112866776B (en) Video generation method and device
JP2011028689A (en) Moving image extraction device, program and moving image extraction method
US8988457B2 (en) Multi image-output display mode apparatus and method
CN107333189B (en) Segmentation method and device for detecting video and storage medium
CN112287771A (en) Method, apparatus, server and medium for detecting video event
CN106254939A (en) Information cuing method and device
CN106162222B (en) A kind of method and device of video lens cutting
US8437611B2 (en) Reproduction control apparatus, reproduction control method, and program
US10133408B2 (en) Method, system and computer program product
CN107360460B (en) Method, device and storage medium for detecting video added subtitles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 518000 20 / F, building a, Shenzhen International Innovation Center, 1006 Shennan Avenue, Futian District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN HUISHOUBAO TECH Co.,Ltd.

Address before: 7 / F, building 8, Weixin Software Park, Nanshan District, Shenzhen, Guangdong 518000

Patentee before: SHENZHEN HUISHOUBAO TECH Co.,Ltd.