CN111372032B - Video synthesis testing method, device, equipment and readable storage medium - Google Patents


Info

Publication number: CN111372032B (granted from application CN202010219215.8A)
Authority: CN (China)
Prior art keywords: video, terminal set, determining, terminal, expected
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111372032A
Inventors: 王艺超, 王展, 胡小鹏
Current and original assignee: Suzhou Keda Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Suzhou Keda Technology Co Ltd, with priority to CN202010219215.8A
Publication of application CN111372032A; application granted and published as CN111372032B

Classifications

    • H04N 7/152 — Multipoint control units for conference systems (H Electricity → H04 Electric communication technique → H04N Pictorial communication, e.g. television → H04N 7/00 Television systems → H04N 7/14 Systems for two-way working → H04N 7/15 Conference systems)
    • H04N 17/00 — Diagnosis, testing or measuring for television systems or their details

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video synthesis testing method, device, equipment and readable storage medium. The method comprises: acquiring the video code stream fed back to a target terminal by a multipoint conference controller, where the video code stream is obtained by the multipoint conference controller synthesizing and encoding the preset video sources input by each current participant terminal; decoding the video code stream to obtain a composite video; identifying the composite video and determining the actual terminal set of the composite video from the identification result; and judging whether the actual terminal set is consistent with the expected terminal set. If they are consistent, the video synthesis detection result at the target terminal is determined to be normal; if not, the result is determined to be abnormal. The method requires no manual intervention: the video synthesis effect can be detected automatically, detection efficiency is improved compared with manual detection, and burn-in testing and large-capacity testing become feasible.

Description

Video synthesis testing method, device, equipment and readable storage medium
Technical Field
The present invention relates to the technical field of video conferencing, and in particular to a video synthesis testing method, apparatus, device, and readable storage medium.
Background
Video composition (also called picture composition) is one of the most basic functions of an MCU (Multipoint Control Unit, also called a multipoint conference controller), the central control device of a video conference system, also known as a video conference server. To ensure that the picture composition function of the MCU can be used normally and stably in large-capacity scenarios, it must be tested during MCU production (e.g., the stability of picture composition and whether picture composition starts normally) to avoid quality problems in the MCU video experience caused by an unstable picture composition function.
Currently, a common test method for the MCU's picture composition function is to decode and display the video code stream at each conference terminal, with testers operating manually and running long cyclic tests to judge whether the composite picture meets requirements. In practice this method has a low degree of automation, which results in low testing efficiency and makes burn-in testing and large-capacity testing difficult to realize.
In summary, the problem of how to perform automatic testing on video synthesis of a video conference is a technical problem that needs to be solved urgently by those skilled in the art at present.
Disclosure of Invention
The invention aims to provide a video synthesis testing method, apparatus, device, and readable storage medium, which can automatically detect the picture composition of a video conference, improve detection efficiency, and make burn-in testing and large-capacity testing feasible.
In order to solve the technical problems, the invention provides the following technical scheme:
a video composition testing method, comprising:
acquiring a video code stream fed back to a target terminal by a multipoint conference controller; the video code stream is obtained by the multipoint conference controller after synthesizing and encoding preset video sources input by each current participating terminal;
decoding the video code stream to obtain a synthesized video;
identifying the synthesized video, and determining an actual terminal set of the synthesized video by using an identification result;
judging whether the actual terminal set is consistent with the expected terminal set;
if so, determining that the video synthesis detection result at the target terminal is normal; and if not, determining that the video synthesis detection result is abnormal.
Preferably, the process of acquiring the set of expected terminals includes:
acquiring current conference information by using a remote login service protocol;
and determining the expected terminal set for generating the picture by using the conference information.
Preferably, each preset video source is a single-face video source and corresponds to one participant terminal; identifying the composite video and determining the actual terminal set of the composite video by using the identification result includes:
carrying out face recognition on the synthesized video to obtain a face recognition result;
and determining the actual terminal set corresponding to the face recognition result according to the corresponding relation between the face, the single face video source and the conference participating terminal.
Preferably, after obtaining the video composition detection result, the method further includes:
acquiring difference information of the actual terminal set and the expected terminal set;
the difference information is used to determine the factors that cause video compositing anomalies.
Preferably, the determining whether the actual terminal set is consistent with the expected terminal set includes:
judging whether the number of elements in the actual terminal set is the same as that in the expected terminal set;
if not, determining that the actual terminal set is inconsistent with the expected terminal set;
if yes, judging whether each element of the actual terminal set is the same as each element of the expected terminal set;
if all the elements are the same, determining that the actual terminal set is consistent with the expected terminal set; and if different elements exist, determining that the actual terminal set is inconsistent with the expected terminal set.
Preferably, after obtaining the video composition detection result, the method further includes:
judging whether a participant terminal to be detected exists or not;
if so, re-determining a target terminal from the participant terminals to be tested, and executing the step of acquiring the video code stream fed back to the target terminal by the multipoint conference controller;
and if not, counting video synthesis detection results corresponding to the participant terminals to be detected.
Preferably, the determining process of the participant terminal to be tested includes:
and screening the participant terminals to be tested from all the current participant terminals.
A video composition test apparatus, comprising:
the video code stream acquisition module is used for acquiring a video code stream fed back to the target terminal by the multipoint conference controller; the video code stream is obtained by the multipoint conference controller after synthesizing and encoding preset video sources input by each current participating terminal;
the video decoding module is used for decoding the video code stream to obtain a synthesized video;
the video identification module is used for identifying the synthesized video and determining an actual terminal set of the synthesized video by using an identification result;
the video synthesis judging module is used for judging whether the actual terminal set is consistent with the expected terminal set; if so, determining that the video synthesis detection result at the target terminal is normal; and if not, determining that the video synthesis detection result is abnormal.
A video compositing test apparatus comprising:
a memory for storing a computer program;
and a processor for implementing the steps of the above video synthesis testing method when executing the computer program.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described video composition testing method.
By applying the method provided by the embodiment of the invention, the video code stream fed back to the target terminal by the multipoint conference controller is obtained, where the video code stream results from the multipoint conference controller synthesizing and encoding the preset video sources input by each current participant terminal; the video code stream is decoded to obtain a composite video; the composite video is identified and its actual terminal set is determined from the identification result; and it is judged whether the actual terminal set is consistent with the expected terminal set. If so, the video synthesis detection result at the target terminal is determined to be normal; if not, the result is determined to be abnormal.
Each participant terminal corresponds to a preset video source. During testing, each participant terminal sends its preset video source to the multipoint conference controller, which performs picture composition (i.e. video synthesis) based on the preset video sources, encodes the result, and feeds it back to each participant terminal. The video code stream arriving at the target terminal is captured and decoded into the composite video. The composite video is then identified to determine which participant terminals' preset video sources it uses, yielding the actual terminal set of the composite video. Whether the actual terminal set is consistent with the expected terminal set is then judged: if so, video synthesis at the target terminal is normal; otherwise it is abnormal. Thus no manual intervention is needed, the video synthesis effect can be detected automatically, detection efficiency is improved over manual detection, and burn-in testing and large-capacity testing become feasible.
Accordingly, embodiments of the present invention further provide a video composition testing apparatus, a device, and a readable storage medium corresponding to the video composition testing method, which have the above technical effects and are not described herein again.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an embodiment of a video composition testing method according to the present invention;
FIG. 2 is a communication diagram of a video composition test according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a video composition testing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a video composition test apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video composition testing apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a video composition testing method according to an embodiment of the present invention, the method including the following steps:
s101, video code streams fed back to a target terminal by the multipoint conference controller are obtained.
The video code stream is obtained by synthesizing and encoding preset video sources input by each current participating terminal by the multipoint conference controller.
As shown in fig. 2, in practical applications, a stream detection server may be configured to obtain a video stream. In fig. 2, MTn is a participant terminal.
In this embodiment, a preset video source may be preset for each participant terminal, and the preset video source of each participant terminal is different and can be identified and distinguished. For example, the preset video source may be a human face video, or may be different object videos.
When testing is carried out, each participating terminal sends the corresponding preset video source to the multipoint conference controller, and the multipoint conference controller synthesizes and codes the preset video source of each participating terminal based on the current conference information to obtain a video code stream. It should be noted that the multipoint conference controller may synthesize the preset video sources of all the participating terminals based on the conference information, and the multipoint conference controller may also synthesize the preset video sources of some of the participating terminals based on the conference information.
After the multipoint conference controller obtains the video code stream, it can send the video code stream to each participant terminal. If the participant terminals are MT1, MT2, ..., MTn, the corresponding video-stream sending ports on the MCU are P1, P2, ..., Pn. To spot-check the composite picture received at terminal MTn, whose port is Pn, the video code stream is obtained from port Pn.
The target terminal is any one of the participant terminals that receive the video code stream.
Specifically, the network firewall can be used to obtain the video code stream, i.e. the stream is captured via iptables (the Linux firewall tool). In particular, an iptables rule can forward the video code stream destined for the target terminal to the detection device.
S102, decoding the video code stream to obtain a composite video.
After the video code stream is obtained, it can be decoded; the decoding result is the composite video. Specifically, the composite video may be data in YUV format (a color coding format).
S103, identifying the synthesized video, and determining an actual terminal set of the synthesized video by using the identification result.
After the composite video is obtained, it can be identified: that is, which preset video sources were used to synthesize it are recognized, and then, from the correspondence between preset video sources and participant terminals, it can be determined which participant terminals' videos took part in the composition. In this application, each participant terminal identified from the composite video is an element of the actual terminal set; that is, each element of the actual terminal set is a terminal participating in the video composition.
Preferably, a human face can be used as the video source, and the video content is identified by recognizing the face in it, which better matches an actual meeting scene. Specifically, each preset video source is a single-face video source and corresponds to one participant terminal. The actual terminal set determination process then includes:
step one, carrying out face recognition on the synthesized video to obtain a face recognition result;
and step two, determining an actual terminal set corresponding to the face recognition result according to the corresponding relation among the face, the preset video source and the participant terminal.
For convenience of description, the above two steps will be described in combination.
After the composite video is obtained, face recognition can be performed on it to obtain a face recognition result. Then, according to the three-way correspondence among face, single-face video source, and participant terminal, the actual terminal set corresponding to the face recognition result is determined.
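The mapping step just described can be sketched as follows. This is an illustrative sketch, not part of the patent: the face-recognition stage itself is stubbed out, and `FACE_TO_TERMINAL`, the face labels, and the function name are all assumed for the example.

```python
# Sketch: translate a face-recognition result on the composite video into
# the actual terminal set, via the face -> single-face video source ->
# participant terminal correspondence. The mapping below is hypothetical.

FACE_TO_TERMINAL = {
    "face_A": "MT1",  # MT1's preset single-face video source shows face_A
    "face_B": "MT2",
    "face_C": "MT3",
}

def actual_terminal_set(recognized_faces):
    """Return the set of participant terminals whose preset video sources
    appear (as recognized faces) in the composite video."""
    return {FACE_TO_TERMINAL[f] for f in recognized_faces if f in FACE_TO_TERMINAL}

print(sorted(actual_terminal_set(["face_A", "face_C"])))  # -> ['MT1', 'MT3']
```

Unknown faces are simply skipped here; a real implementation would likely flag them, since an unrecognizable region is itself evidence of a composition problem.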
And S104, judging whether the actual terminal set is consistent with the expected terminal set.
Each element in the expected terminal set is a participant terminal whose preset video source should participate in the video composition.
The process of obtaining the set of expected terminals may include:
acquiring current conference information by using a remote login service protocol;
and step two, determining an expected terminal set for generating the picture by using the conference information.
For convenience of description, the above two steps will be described in combination.
The current conference information can be obtained from the multipoint conference controller using the telnet protocol. It may specifically include the port numbers of the current participant terminals, conference moderator information, and the port number, ID, and terminal name of each participant terminal that should take part in picture composition. For example, the conference information yields the expected terminal set of terminal names in the picture composition, such as set M comprising MT1, MT2, ..., MTn.
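Extracting the expected terminal set M from such conference information can be sketched as below. The text format of `SAMPLE_CONFERENCE_INFO` is purely an assumption for illustration; the real output of an MCU's telnet interface will differ.

```python
# Sketch: parse conference information (port number, ID, and terminal name of
# each terminal that should take part in picture composition) into the
# expected terminal set M. The line format here is hypothetical.

SAMPLE_CONFERENCE_INFO = """\
moderator: MT1
compose: port=P1 id=1 name=MT1
compose: port=P2 id=2 name=MT2
compose: port=P3 id=3 name=MT3
"""

def expected_terminal_set(info_text):
    """Collect the terminal names flagged for picture composition."""
    terminals = set()
    for line in info_text.splitlines():
        if line.startswith("compose:"):
            # "port=P1 id=1 name=MT1" -> {"port": "P1", "id": "1", "name": "MT1"}
            fields = dict(p.split("=", 1) for p in line.split()[1:])
            terminals.add(fields["name"])
    return terminals
```

In practice the parsing would be driven by whatever structure the MCU actually emits; the point is only that the expected set is derived mechanically from the conference information rather than entered by hand.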
Judging whether the actual terminal set is consistent with the expected terminal set amounts to judging whether the current video composition matches expectations. For example, if the expected terminal set M comprises MT1, MT2, ..., MTn, the actual terminal set N comprises MT1, MT2, ..., MTm, and the elements of M and N correspond one to one, the actual terminal set is determined to be consistent with the expected terminal set; otherwise they are inconsistent.
Preferably, in order to improve judgment efficiency, the following steps may be performed to determine whether the actual terminal set is consistent with the expected terminal set:
step one, judging whether the element quantity of an actual terminal set is the same as that of an expected terminal set;
step two, if not, determining that the actual terminal set is inconsistent with the expected terminal set;
if yes, judging whether each element of the actual terminal set is the same as each element of the expected terminal set;
step four, if all elements are the same, determining that the actual terminal set is consistent with the expected terminal set;
and step five, if different elements exist, determining that the actual terminal set is inconsistent with the expected terminal set.
That is, it may be determined whether the number of elements in the actual terminal set is consistent with that in the expected terminal set, and if the number of elements is not consistent, it may be directly determined that the actual terminal set is not consistent with that in the expected terminal set; and when the number of the elements of the actual terminal set is consistent with that of the expected terminal set, whether the elements in the two sets are in one-to-one correspondence can be further compared. If the terminal sets correspond to each other, determining that the actual terminal set is consistent with the expected terminal set; and if the terminal sets are not in one-to-one correspondence, determining that the actual terminal set is inconsistent with the expected terminal set.
For example, if the expected terminal set M comprises MT1, MT2, ..., MTn and the actual terminal set N comprises MT1, MT2, ..., MTm: if n and m differ, it is directly determined that M ≠ N; if n equals m, it is further judged whether the elements of M and N correspond one to one. If they do, the actual terminal set is consistent with the expected terminal set; if not, it is inconsistent.
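The two-stage comparison in steps one to five above can be sketched as follows; the function name is an assumption, and with Python sets the element-count check is strictly an early exit before the (potentially more expensive) element-by-element comparison.

```python
# Sketch of the S104 comparison: first compare element counts, and only if
# they match compare the elements themselves.

def sets_consistent(expected, actual):
    """Return True iff the actual terminal set matches the expected one."""
    if len(expected) != len(actual):
        return False          # counts differ -> inconsistent, no need to go on
    return expected == actual  # equal counts: compare the elements themselves

print(sets_consistent({"MT1", "MT2"}, {"MT1", "MT2"}))  # -> True
print(sets_consistent({"MT1", "MT2"}, {"MT1", "MT3"}))  # -> False
```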
After the judgment result of whether the actual terminal set is consistent with the expected terminal set is obtained, whether video synthesis at the target terminal is normal can be determined. Specifically, if the judgment result is yes, step S105 is executed; if it is no, step S106 is executed.
And S105, determining that the video synthesis detection result at the target terminal is normal.
When the actual terminal set is consistent with the expected terminal set, i.e. the current video composition proceeded as expected, it can be determined that video synthesis at the target terminal is normal, meaning the composite video obtained by the target terminal matches expectations.
S106, determining that the video synthesis detection result at the target terminal is abnormal.
When the actual terminal set is inconsistent with the expected terminal set, i.e. the current video composition did not proceed as expected, it is determined that video synthesis at the target terminal is abnormal, meaning the composite video obtained by the target terminal does not match expectations. Possible reasons include a problem in communication between the target terminal and the multipoint conference controller, a problem in the multipoint conference controller's composition itself, or a problem in communication between other participant terminals and the multipoint conference controller.
Preferably, after a video synthesis abnormality at the target terminal is determined, the factors causing it may further be traced. The specific implementation process includes:
acquiring difference information of an actual terminal set and an expected terminal set;
and step two, determining factors causing video synthesis abnormity by using the difference information.
The difference information between the actual and expected terminal sets can be stored while judging whether they are consistent, so that it can be read directly when tracing the factors causing the video synthesis abnormality. Alternatively, the difference information can be obtained at tracing time by comparing the actual terminal set with the expected terminal set.
The difference information may specifically be a difference in the number of elements and/or a difference in the elements themselves. The element-count difference can be used alone, e.g. the actual terminal set contains two fewer participant terminals than the expected terminal set. Differences in the elements themselves can also be used, e.g. comparison of the two sets shows that the name of a terminal participating in the composition is wrong. Or both the number of differences and the names of the differing terminals can be found.
After the difference information is obtained, the factors behind the video synthesis abnormality (such as communication problems or problems in the multipoint conference controller) can be inferred.
For example, when the expected terminal set comprises the four elements a1, a2, a3, and a4 and the actual terminal set comprises the three elements a1, a2, and a4, the comparison shows an element-count difference of 1 and a difference in the elements themselves: the actual terminal set lacks a3. The source of the video synthesis abnormality can then be traced starting from a3, e.g. abnormal communication between a3 and the multipoint conference controller, or an abnormality in the multipoint conference controller itself.
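The difference information of the a1..a4 example can be computed with plain set operations; the field names below are illustrative assumptions.

```python
# Sketch: derive the difference information (element-count difference and the
# differing elements themselves) used to trace an abnormal composition.

def difference_info(expected, actual):
    return {
        "count_diff": len(expected) - len(actual),
        "missing": sorted(expected - actual),     # expected but absent
        "unexpected": sorted(actual - expected),  # present but not expected
    }

info = difference_info({"a1", "a2", "a3", "a4"}, {"a1", "a2", "a4"})
# count_diff is 1 and missing is ["a3"]: tracing starts from a3's link to the MCU
```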
Preferably, in order to improve detection accuracy and precisely locate the factors causing video synthesis abnormalities, video composition detection may in this embodiment also be performed for multiple participant terminals separately. Specifically, after the video synthesis detection result of the target terminal is obtained, the following steps may further be performed:
step one, judging whether a participant terminal to be detected exists or not;
step two, if yes, re-determining a target terminal from the participant terminals to be tested, and executing the step of obtaining the video code stream fed back to the target terminal by the multipoint conference controller;
and step three, if not, counting video synthesis detection results corresponding to all the participant terminals to be detected.
In practical applications, each current participant terminal can serve as a participant terminal to be tested. When video synthesis detection at one participant terminal finishes, its port number can be recorded, and whether untested participant terminals remain can be determined by searching the recorded port numbers. Alternatively, each participant terminal is taken as the target terminal in turn, steps S101-S105 or S106 are executed for it, and the video synthesis detection results of all participant terminals are then tallied; the statistical result reflects the overall quality of the conference system's current picture composition.
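Iterating the detection over every terminal to be tested and tallying the results, as described above, can be sketched as follows. Here `detect_at` stands in for the whole S101-S106 pipeline and is stubbed for illustration; all names are assumptions, not part of the patent.

```python
# Sketch: take each participant terminal to be tested as the target terminal
# in turn, run the detection on it, and tally the per-terminal results.

def run_detection(terminals, detect_at):
    """detect_at(terminal) returns "normal" or "abnormal" (steps S101-S106)."""
    results = {}
    for target in terminals:              # each terminal becomes the target in turn
        results[target] = detect_at(target)
    normal = sum(1 for r in results.values() if r == "normal")
    return results, f"{normal}/{len(terminals)} terminals normal"

# Hypothetical stub: pretend MT3's composition is abnormal.
stub = lambda t: "abnormal" if t == "MT3" else "normal"
print(run_detection(["MT1", "MT2", "MT3"], stub)[1])  # -> 2/3 terminals normal
```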
Preferably, when there are many participant terminals, a spot check can be performed to avoid consuming excessive resources and time. Specifically, when determining target terminals, the participant terminals to be tested are screened out of the current participant terminals. According to the actual detection requirement, a subset of the current participant terminals can be sampled at random, e.g. 60% of the total number of participant terminals, or a specified number such as 20, and detected.
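The spot-check screening can be sketched as random sampling without replacement; the parameter names (`ratio`, `count`) are illustrative, and the 60%/20 figures come from the example in the text.

```python
# Sketch: screen the participant terminals to be tested out of all current
# participant terminals, either by a ratio of the total or by a fixed count.

import random

def spot_check_sample(terminals, ratio=None, count=None):
    """Pick a random subset of terminals; exactly one of ratio/count is given."""
    if ratio is not None:
        count = max(1, int(len(terminals) * ratio))
    count = min(count, len(terminals))        # never ask for more than exist
    return random.sample(terminals, count)    # sampling without replacement

terminals = [f"MT{i}" for i in range(1, 101)]
sample = spot_check_sample(terminals, ratio=0.6)  # ~60% of 100 terminals
```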
By applying the method provided by the embodiment of the invention, the video code stream fed back to the target terminal by the multipoint conference controller is obtained, where the video code stream results from the multipoint conference controller synthesizing and encoding the preset video sources input by each current participant terminal; the video code stream is decoded to obtain a composite video; the composite video is identified and its actual terminal set is determined from the identification result; and it is judged whether the actual terminal set is consistent with the expected terminal set. If so, the video synthesis detection result at the target terminal is determined to be normal; if not, the result is determined to be abnormal.
Each participant terminal corresponds to a preset video source. During testing, each participant terminal sends its preset video source to the multipoint conference controller, which performs picture composition (i.e. video synthesis) based on the preset video sources, encodes the result, and feeds it back to each participant terminal. The video code stream arriving at the target terminal is captured and decoded into the composite video. The composite video is then identified to determine which participant terminals' preset video sources it uses, yielding the actual terminal set of the composite video. Whether the actual terminal set is consistent with the expected terminal set is then judged: if so, video synthesis at the target terminal is normal; otherwise it is abnormal. Thus no manual intervention is needed, the video synthesis effect can be detected automatically, detection efficiency is improved over manual detection, and burn-in testing and large-capacity testing become feasible.
Corresponding to the above method embodiments, the embodiments of the present invention further provide a video composition testing apparatus, and the video composition testing apparatus described below and the video composition testing method described above may be referred to in correspondence with each other.
Referring to fig. 3, the apparatus includes the following modules:
a video code stream obtaining module 101, configured to obtain a video code stream fed back to a target terminal by a multipoint conference controller; the video code stream is obtained by synthesizing and encoding preset video sources input by each current participating terminal by the multipoint conference controller;
the video decoding module 102 is configured to decode a video code stream to obtain a composite video;
the video identification module 103 is used for identifying the synthesized video and determining an actual terminal set of the synthesized video by using the identification result;
a video composition judgment module 104, configured to judge whether the actual terminal set is consistent with the expected terminal set; if so, determine that the video composition detection result at the target terminal is normal; if not, determine that the video composition detection result is abnormal.
With the apparatus provided by this embodiment of the invention, the video code stream fed back to the target terminal by the multipoint conference controller is obtained; the code stream is produced by the controller compositing and encoding the preset video sources input by the current participant terminals. The code stream is decoded to obtain a composite video, the composite video is recognized, and the actual terminal set of the composite video is determined from the recognition result. Whether the actual terminal set is consistent with the expected terminal set is then judged: if so, the video composition detection result at the target terminal is determined to be normal; if not, it is determined to be abnormal.
Each participant terminal corresponds to a preset video source. During testing, each participant terminal sends its preset video source to the multipoint conference controller, which performs picture composition based on the preset video sources, namely video composition, and feeds the encoded result back to each participant terminal. The video code stream is acquired at the target terminal and decoded to obtain the composite video. The composite video is then recognized, which determines which preset video sources sent by the participant terminals are used in it, and hence the actual terminal set of the composite video. Whether the actual terminal set is consistent with the expected terminal set is then judged: if so, video composition at the target terminal is normal; otherwise, it is abnormal. Therefore, in this apparatus, the video composition effect can be detected automatically without manual intervention, detection efficiency is improved over manual inspection, and replicated tests and large-capacity tests can be realized.
In a specific embodiment of the present invention, the video composition judgment module 104 is specifically configured to obtain the current conference information by using a remote login (telnet) service protocol, and to determine from that conference information the expected terminal set for the generated picture.
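Deriving the expected terminal set from conference information can be sketched as a simple parse. The `pane<N>=<terminal>` line format below is a hypothetical assumption about how the controller reports its layout; the patent does not specify the wire format:

```python
def expected_terminal_set(conference_info: str) -> set:
    """Extract the expected terminal set from conference-info text.
    Assumes each layout line has the hypothetical form 'paneN=terminalId',
    naming one participant terminal whose video appears in the picture."""
    expected = set()
    for line in conference_info.splitlines():
        if "=" in line:
            _, terminal = line.split("=", 1)  # keep the terminal id after '='
            expected.add(terminal.strip())
    return expected
```

In practice this text would come from the telnet session with the multipoint conference controller rather than a literal string.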
In a specific embodiment of the present invention, each preset video source is a single-face video source, and each preset video source corresponds to one participant terminal. The video recognition module 103 is configured to perform face recognition on the composite video to obtain a face recognition result, and to determine the actual terminal set corresponding to that result according to the correspondence among faces, single-face video sources, and participant terminals.
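Because each face is bound to exactly one video source and thus one terminal, mapping recognition results back to terminals is a dictionary lookup. A minimal sketch, assuming the face recognizer returns face identifiers and the face-to-terminal table is built when the preset sources are assigned:

```python
def terminals_from_faces(recognized_faces, face_to_terminal):
    """Map recognized face IDs back to participant terminals via the
    known face -> single-face video source -> terminal correspondence.
    Faces not in the table (e.g. recognition noise) are ignored."""
    return {face_to_terminal[f] for f in recognized_faces if f in face_to_terminal}
```

The actual face recognition itself (locating and identifying faces in decoded frames) is outside this sketch.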
In a specific embodiment of the present invention, the apparatus further comprises:
an anomaly factor analysis module, configured to obtain, after the video composition detection result is obtained, difference information between the actual terminal set and the expected terminal set; the difference information is used to determine the factors causing the video composition anomaly.
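The difference information naturally splits into two set differences: expected terminals missing from the picture, and unexpected extras. A sketch of that computation (the dictionary keys are illustrative, not from the patent):

```python
def difference_info(actual, expected):
    """Difference information for anomaly analysis: terminals whose video
    should appear but does not ('missing'), and terminals whose video
    appears but should not ('unexpected')."""
    return {
        "missing": sorted(set(expected) - set(actual)),
        "unexpected": sorted(set(actual) - set(expected)),
    }
```

A non-empty "missing" list might point at an encoding or layout fault at the controller, while "unexpected" entries could indicate a stale layout.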
In a specific embodiment of the present invention, the video composition judgment module 104 is specifically configured to judge whether the actual terminal set has the same number of elements as the expected terminal set; if not, the actual terminal set is determined to be inconsistent with the expected terminal set; if so, it further judges whether each element of the actual terminal set is the same as each element of the expected terminal set: if all elements are the same, the sets are determined to be consistent; if any element differs, they are determined to be inconsistent.
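The two-stage check above (count first, then element-by-element) can be written as a cheap early exit followed by a set comparison; this is a sketch of the described logic, not the patented implementation:

```python
def sets_consistent(actual, expected) -> bool:
    """Two-stage consistency check from the description: compare element
    counts first, and only if they match compare the elements themselves."""
    actual, expected = set(actual), set(expected)
    if len(actual) != len(expected):   # stage 1: counts differ -> inconsistent
        return False
    return actual == expected          # stage 2: element-wise comparison
```

The count check is redundant once both operands are sets, but it mirrors the order of judgments laid out in the embodiment and skips the element comparison early when sizes differ.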
In a specific embodiment of the present invention, the apparatus further comprises:
a detection statistics module, configured to judge, after a video composition detection result is obtained, whether any participant terminal to be tested remains; if so, re-determine the target terminal from the participant terminals to be tested and execute the step of obtaining the video code stream fed back to the target terminal by the multipoint conference controller; if not, aggregate the video composition detection results corresponding to each tested participant terminal.
In a specific embodiment of the present invention, the detection statistics module is specifically configured to screen the participant terminals to be tested out of the current participant terminals.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a video composition testing device; the video composition testing device described below and the video composition testing method described above may be mutually referenced.
Referring to fig. 4, the video composition test apparatus includes:
a memory D1 for storing a computer program;
a processor D2, configured to implement the steps of the video composition testing method of the above method embodiment when executing the computer program.
Specifically, fig. 5 shows a structural diagram of the video composition testing device provided in this embodiment. The device may vary considerably with configuration or performance, and may include one or more central processing units (CPUs) 322 and memory 332, as well as one or more storage media 330 (e.g., one or more mass storage devices) storing an application 342 or data 344. The memory 332 and the storage media 330 may provide transient or persistent storage. A program stored on the storage medium 330 may include one or more modules (not shown), each of which may include a series of instruction operations for the device. Further, the central processor 322 may be configured to communicate with the storage medium 330 and execute the series of instruction operations from the storage medium 330 on the video composition testing device 301.
The video composition testing device 301 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341, such as Windows Server™, Mac OS X™, Unix™, Linux™, or FreeBSD™.
The steps of the video composition testing method described above may be implemented by this video composition testing device structure.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a readable storage medium; the readable storage medium described below and the video composition testing method described above may be mutually referenced.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video composition testing method of the above method embodiment.
The readable storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other readable storage medium capable of storing program code.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Claims (10)

1. A video composition test method, comprising:
s101, acquiring a video code stream fed back to a target terminal by a multipoint conference controller; the video code stream is obtained by the multipoint conference controller synthesizing and encoding preset video sources of all the participating terminals based on conference information, or the multipoint conference controller synthesizing and encoding preset video sources of part of the participating terminals based on the conference information;
s102, decoding the video code stream to obtain a synthesized video;
s103, identifying the synthesized video, and determining an actual terminal set of the synthesized video by using an identification result;
s104, judging whether the actual terminal set is consistent with the expected terminal set;
s105, if yes, determining that the video synthesis detection result at the target terminal is normal;
s106, if not, determining that the video synthesis detection result is abnormal.
2. The video synthesis testing method of claim 1, wherein the process of obtaining the set of expected terminals comprises:
acquiring current conference information by using a remote login service protocol;
and determining the expected terminal set for generating the picture by using the conference information.
3. The video synthesis testing method according to claim 1, wherein each of the preset video sources is a single-face video source, and each of the preset video sources corresponds to one of the participant terminals; and identifying the composite video and determining an actual terminal set of the composite video by using the identification result comprises:
carrying out face recognition on the synthesized video to obtain a face recognition result;
and determining the actual terminal set corresponding to the face recognition result according to the corresponding relation between the face, the single face video source and the conference participating terminal.
4. The video composition test method according to claim 1, further comprising, after obtaining the video composition detection result:
acquiring difference information of the actual terminal set and the expected terminal set;
the difference information is used to determine the factors that cause video compositing anomalies.
5. The video synthesis testing method of claim 1, wherein determining whether the set of actual terminals is consistent with the set of expected terminals comprises:
judging whether the number of elements of the actual terminal set is the same as that of the expected terminal set;
if not, determining that the actual terminal set is inconsistent with the expected terminal set;
if yes, judging whether each element of the actual terminal set is the same as each element of the expected terminal set;
if all the elements are the same, determining that the actual terminal set is consistent with the expected terminal set; and if different elements exist, determining that the actual terminal set is inconsistent with the expected terminal set.
6. The video composition test method according to claim 1, further comprising, after obtaining the video composition detection result:
judging whether a participant terminal to be detected exists or not;
if so, re-determining a target terminal from the participant terminals to be detected, and sequentially executing the steps S101 to S105 or the step S106 to perform video synthesis detection;
and if not, counting video synthesis detection results corresponding to the participant terminals to be detected.
7. The video synthesis test method according to claim 6, wherein the determination process of the participant terminal to be tested comprises:
and screening the participant terminals to be tested from all the current participant terminals.
8. A video composition test apparatus, comprising:
the video code stream acquisition module is used for acquiring a video code stream fed back to the target terminal by the multipoint conference controller; the video code stream is obtained by the multipoint conference controller synthesizing and encoding preset video sources of all the participating terminals based on conference information, or the multipoint conference controller synthesizing and encoding preset video sources of part of the participating terminals based on the conference information;
the video decoding module is used for decoding the video code stream to obtain a synthesized video;
the video identification module is used for identifying the synthesized video and determining an actual terminal set of the synthesized video by using an identification result;
the video synthesis judging module is used for judging whether the actual terminal set is consistent with the expected terminal set; if so, determining that the video synthesis detection result at the target terminal is normal; and if not, determining that the video synthesis detection result is abnormal.
9. A video compositing test apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the video composition testing method according to any one of claims 1 to 7 when executing said computer program.
10. A readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the video composition testing method according to any one of claims 1 to 7.
CN202010219215.8A 2020-03-25 2020-03-25 Video synthesis testing method, device, equipment and readable storage medium Active CN111372032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010219215.8A CN111372032B (en) 2020-03-25 2020-03-25 Video synthesis testing method, device, equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN111372032A CN111372032A (en) 2020-07-03
CN111372032B true CN111372032B (en) 2021-04-23

Family

ID=71211251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010219215.8A Active CN111372032B (en) 2020-03-25 2020-03-25 Video synthesis testing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111372032B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114338489B (en) * 2021-12-29 2024-03-15 深圳市捷视飞通科技股份有限公司 Automatic test method, device, equipment and storage medium for multimedia conference system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102685443A (en) * 2011-03-17 2012-09-19 PDK Co., Ltd. System and method for a multipoint video conference
CN106302477A (en) * 2016-08-18 2017-01-04 合网络技术(北京)有限公司 A kind of net cast method of testing and system
US10021348B1 (en) * 2017-07-21 2018-07-10 Lenovo (Singapore) Pte. Ltd. Conferencing system, display method for shared display device, and switching device
CN109600571A (en) * 2018-12-27 2019-04-09 北京真视通科技股份有限公司 Multimedia resource transmission measuring syste and multimedia resource transmission testing method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
WO2014138280A1 (en) * 2013-03-05 2014-09-12 Vtm, Llc Medical telecommunications system
CN110087018B (en) * 2019-04-02 2020-11-20 福建星网智慧科技股份有限公司 Conference layout testing method and system of video conference system

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN102685443A (en) * 2011-03-17 2012-09-19 PDK Co., Ltd. System and method for a multipoint video conference
CN106302477A (en) * 2016-08-18 2017-01-04 合网络技术(北京)有限公司 A kind of net cast method of testing and system
US10021348B1 (en) * 2017-07-21 2018-07-10 Lenovo (Singapore) Pte. Ltd. Conferencing system, display method for shared display device, and switching device
CN109600571A (en) * 2018-12-27 2019-04-09 北京真视通科技股份有限公司 Multimedia resource transmission measuring syste and multimedia resource transmission testing method

Non-Patent Citations (1)

Title
Fast H.264 video composition method for multipoint video conferencing; Zhang Wei et al.; Journal of Chinese Computer Systems (《小型微型计算机系统》); 2012-05-15 (No. 05); pp. 1062-1067 *

Also Published As

Publication number Publication date
CN111372032A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN102655585B (en) Video conference system and time delay testing method, device and system thereof
CN111372032B (en) Video synthesis testing method, device, equipment and readable storage medium
CN108270622A (en) A kind of method and system that concurrent emulation testing is carried out to video communication service system
CN107396206B (en) Live broadcast data stream pushing method and system
CN110691238A (en) Video reconstruction quality testing method, device, equipment and readable storage medium
CN110620685A (en) Method and device for reporting device exception
CN110581988B (en) Signal quality detection method and device, electronic equipment and storage medium
CN111817916A (en) Test method, device, equipment and storage medium based on mobile terminal cluster
CN109600571B (en) Multimedia resource transmission test system and multimedia resource transmission test method
CN114338489B (en) Automatic test method, device, equipment and storage medium for multimedia conference system
CN112422956B (en) Data testing system and method
CN113923443A (en) Network video recorder testing method and device and computer readable storage medium
CN114610605A (en) Test method, test device, terminal equipment and storage medium
CN110381308B (en) System for testing live video processing effect
CN110087066B (en) One-key automatic inspection method applied to online inspection
CN111163284B (en) Method, device and equipment for configuring recording and broadcasting host
CN111800665B (en) Method, system, device and readable storage medium for detecting health of device
CN117425001A (en) Automatic detection method and device for video synchronism
CN106730844A (en) A kind of scene run time method of testing and device
CN116320265B (en) Multi-video conference collaborative meeting method and system
CN116437117A (en) Live broadcast full link pressure measurement method and device, electronic equipment and storage medium
CN103686159B (en) A kind of test video signal method and device
CN117255189A (en) Automatic test method and terminal for audio-video conference system
CN112001237A (en) Video monitoring automatic test method and device
CN118283248A (en) Method, device, equipment, terminal and server for detecting screen pattern

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant