CN110798648A - Video conference processing method and system


Info

Publication number
CN110798648A
CN110798648A
Authority
CN
China
Prior art keywords: video, node server, participant, video network, network node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810878933.9A
Other languages
Chinese (zh)
Inventor
朱紫萱
刘蒙
彭宇龙
韩杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201810878933.9A
Publication of CN110798648A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1827 Network arrangements for conference optimisation or adaptation


Abstract

An embodiment of the invention provides a video conference processing method and system. In the method, a first video network node server receives a video stream from a video network monitoring device; the first video network node server performs face recognition on each frame of video image in the video stream to obtain face feature data; the first video network node server matches the obtained face feature data against the face feature data of each participant to obtain a matching result for each participant; and the first video network node server sends the matching results to a second video network node server, which starts the video conference when the matching results satisfy a preset matching condition. The embodiment simplifies the steps for joining a video conference and saves labor and time costs.

Description

Video conference processing method and system
Technical Field
The invention relates to the technical field of video networking, in particular to a video conference processing method and a video conference processing system.
Background
The video network is a real-time network that transmits high-definition video at high speed over Ethernet hardware using a dedicated protocol; it can be regarded as a higher-level form of the Internet.
An existing video conference based on the video network can be started only after every participant has signed in. When there are many participants, each participant's sign-in consumes considerable manpower and time, making the conference-entry procedure very cumbersome.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a processing method for a video conference and a corresponding processing system for a video conference, which overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for processing a video conference, where the method is applied to a video network, the video network includes a first video network node server, a second video network node server and a video network monitoring device, and the first video network node server communicates with the second video network node server and the video network monitoring device, respectively, and the method includes: the first video network node server receives a video stream from the video network monitoring equipment; the first video networking node server performs face recognition on each frame of video image in the video stream to obtain face feature data; the first video network node server respectively matches the face characteristic data with the face characteristic data of each participant to obtain a matching result corresponding to each participant; and the first video network node server sends the matching result to the second video network node server, and the second video network node server is used for starting a video conference when the matching result meets a preset matching condition.
Optionally, before the first video network node server matches the face feature data with the face feature data of each participant to obtain a matching result corresponding to each participant, the method further includes: the first video network node server determines the face feature data of each participant according to a preset face feature database and the related information of each participant preset in the second video network node server.
Optionally, the determining, by the first video network node server, of the face feature data of each participant according to a preset face feature database and the related information of each participant preset in the second video network node server includes: the first video network node server retrieves the related information of each participant from the face feature database to obtain the face feature data of each participant corresponding to that related information; the related information includes the participants' names and/or job numbers.
Optionally, the second video network node server is configured to start the video conference when the matching result indicates that the first video network node server has identified the face feature data of each participant.
Optionally, the video stream is a video stream acquired by the video networking monitoring device within a preset time range before a preset start time of the video conference.
The embodiment of the invention also discloses a processing system of a video conference, which is applied to the video network, wherein the video network comprises a first video network node server, a second video network node server and video network monitoring equipment, the first video network node server is respectively communicated with the second video network node server and the video network monitoring equipment, and the first video network node server comprises: the receiving module is used for receiving the video stream from the video networking monitoring equipment; the identification module is used for carrying out face identification on each frame of video image in the video stream to obtain face characteristic data; the matching module is used for matching the face feature data with the face feature data of each participant respectively to obtain a matching result corresponding to each participant; and the sending module is used for sending the matching result to the second video network node server, and the second video network node server is used for starting a video conference when the matching result meets a preset matching condition.
Optionally, the first video networking node server further comprises: and the determining module is used for determining the face feature data of each participant according to a preset face feature database and the preset related information of each participant in the second video network node server before the matching module matches the face feature data with the face feature data of each participant respectively to obtain the matching result corresponding to each participant.
Optionally, the determining module is configured to retrieve the relevant information of each participant in the face feature database to obtain face feature data of each participant corresponding to the relevant information of each participant; wherein the related information comprises names of the participants and/or job numbers of the participants.
Optionally, the second video network node server is configured to start the video conference when the matching result indicates that the first video network node server has identified the face feature data of each participant.
Optionally, the video stream is a video stream acquired by the video networking monitoring device within a preset time range before a preset start time of the video conference.
The embodiment of the invention has the following advantages:
The embodiment of the invention is applied to a video network that comprises a first video network node server, a second video network node server and a video network monitoring device, where the first video network node server communicates with the second video network node server and the video network monitoring device, respectively.
In the embodiment of the invention, a first video networking node server receives a video stream from video networking monitoring equipment, and performs face recognition on each frame of video image in the video stream to obtain face feature data. And matching the face characteristic data with the face characteristic data of each participant respectively to obtain a matching result corresponding to each participant. And finally, sending the matching result to a second video network node server. And the second video networking node server is used for starting the video conference when the matching result meets the preset matching condition.
The embodiment of the invention applies the characteristics of the video network, sets the related information of the participants in advance on the second video network node server, and the first video network node server performs face recognition on the video stream acquired by the video network monitoring equipment so as to judge whether the participants exist in the video stream. The second video networking node server may initiate the video conference when one or more participants are present in the video stream. The embodiment of the invention simplifies the operation steps of video conference joining and saves the labor cost and the time cost.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
fig. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a flow chart of steps in a method embodiment of a video conference process of the present invention;
fig. 6 is a block diagram of an embodiment of a video conference processing system according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video network is an important milestone in network development. It is a real-time network that enables real-time transmission of high-definition video and pushes numerous Internet applications toward high-definition, face-to-face interaction.
The video network adopts real-time high-definition video switching technology and can integrate dozens of required services (video, voice, pictures, text, communication, data and so on) on one network platform, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, video on demand (VOD), television mail, personal video recorder (PVR), intranet (self-office) channels, intelligent video broadcast control and information distribution, delivering high-definition-quality video through a television or a computer.
To better understand the embodiments of the present invention, the video network is introduced below.
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology of the video network innovates on traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network packet switching or network circuit switching, the video networking technology uses packet switching to meet the demands of streaming media (a data transmission technique that turns received data into a stable, continuous stream and transmits it continuously, so that the sound or image perceived by the user is smooth and the user can begin viewing before the whole file has been delivered). The video networking technology has the flexibility, simplicity and low cost of packet switching together with the quality and security guarantees of circuit switching, achieving seamless whole-network switched virtual circuits and a unified data format.
Switching Technology
The video network adopts the two advantages of Ethernet, asynchronism and packet switching, and eliminates Ethernet's defects while remaining fully compatible with it. The whole network has seamless end-to-end connectivity, connects directly to user terminals and directly carries IP data packets. User data requires no format conversion anywhere in the network. The video network is a higher-level form of Ethernet and a real-time exchange platform; it enables whole-network, large-scale, real-time transmission of high-definition video that the existing Internet cannot achieve, and pushes numerous network video applications toward high definition and unification.
Server Technology
The server technology of the video network and unified video platform differs from that of a traditional server: its streaming media transmission is built on a connection-oriented basis, its data processing capacity is independent of flow and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video network and unified video platform is much simpler than data processing, and efficiency improves by more than a hundredfold over a traditional server.
Storage Technology
To handle media content of very large capacity and throughput, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. Program information in a server instruction is mapped to a specific hard disk space, and the media content no longer passes through the server but is sent instantly and directly to the user terminal, so the typical user waiting time is under 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of an IP Internet system of the same grade, while generating concurrent throughput three times larger than a traditional hard disk array, for an overall efficiency improvement of more than tenfold.
Network Security Technology
The structural design of the video network eliminates, by structure, the network security problems that trouble the Internet, through mechanisms such as independent permission control for each service and complete isolation of devices and user data. It generally requires no antivirus software or firewall, is immune to hacker and virus attacks, and provides users with a structurally worry-free secure network.
Service Innovation Technology
The unified video platform integrates service and transmission: whether for a single user, a private-network user or a network aggregate, only one automatic connection is needed. User terminals, set-top boxes or PCs connect directly to the unified video platform to obtain a rich variety of multimedia video services. The unified video platform replaces traditional, complex application programming with a menu-style configuration table, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part mainly fall into 3 types: node servers, access switches, and terminals (including various set-top boxes, coding boards, memories, etc.). A node server connects to access switches, and an access switch may connect to multiple terminals and to an Ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, a node server belongs to both the access network and the metropolitan area network.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (circled part), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 Devices in the video network of the embodiment of the present invention mainly fall into 3 types: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or a national network, global network, etc.) and an access network.
1.2 The devices of the access network part mainly fall into 3 types: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204.
The network interface module 201, the CPU module 203 and the disk array module 204 all feed into the switching engine module 202. The switching engine module 202 looks up the address table 205 for each incoming packet to obtain its direction information, and stores the packet in the queue of the corresponding packet buffer 206 according to that direction information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control of the hard disks, including initialization, reading and writing; the CPU module 203 is mainly responsible for protocol processing with access switches and terminals (not shown), for configuring the address table 205 (which includes a downlink protocol packet address table, an uplink protocol packet address table and a data packet address table), and for configuring the disk array module 204.
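The forwarding rule above can be made concrete with a short sketch. The following Python model is purely illustrative (the patent describes hardware, not software); names such as PacketQueue and poll_and_forward are hypothetical:

```python
from collections import deque

class PacketQueue:
    """One packet buffer queue inside the switching engine (hypothetical model)."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.packets = deque()
        self.packet_counter = 0          # the "queue packet counter" from the text

    def enqueue(self, pkt):
        if len(self.packets) >= self.capacity:   # queue nearly full: discard
            return False
        self.packets.append(pkt)
        self.packet_counter += 1
        return True

def poll_and_forward(queues, port_send_buffer, send_buffer_capacity):
    """Poll every packet buffer queue and forward one packet per eligible queue.

    A queue is forwarded only when 1) the port send buffer is not full and
    2) the queue packet counter is greater than zero, as stated above."""
    for q in queues:
        if len(port_send_buffer) < send_buffer_capacity and q.packet_counter > 0:
            port_send_buffer.append(q.packets.popleft())
            q.packet_counter -= 1
```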
The access switch:
as shown in fig. 3, the network interface module (downstream network interface module 301, upstream network interface module 302), the switching engine module 303, and the CPU module 304 are mainly included.
A packet (uplink data) arriving from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303, as does a packet arriving from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 is going from a downlink network interface to an uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from a downlink network interface to an uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues; in this embodiment of the present invention, two cases are distinguished:
If the queue is going from a downlink network interface to an uplink network interface, forwarding requires that: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the code rate control module has been obtained.
If the queue is not going from a downlink network interface to an uplink network interface, forwarding requires that: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues going from downlink network interfaces to uplink network interfaces, to control the rate of upstream forwarding.
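The token-gated case for downlink-to-uplink traffic can be sketched the same way. This is a minimal token-bucket model under assumed parameters; the RateControl class and its interval are hypothetical stand-ins for the code rate control module 308:

```python
import time

class RateControl:
    """Hypothetical stand-in for the code rate control module 308: it grants
    one token per queue every `interval_s` seconds, at an interval that would
    be configured by the CPU module 304."""
    def __init__(self, interval_s=0.001):
        self.interval_s = interval_s
        self.tokens = {}                 # queue id -> available tokens
        self.last = time.monotonic()

    def refill(self, queue_ids):
        elapsed = time.monotonic() - self.last
        new_tokens = int(elapsed / self.interval_s)
        if new_tokens:
            for qid in queue_ids:
                self.tokens[qid] = self.tokens.get(qid, 0) + new_tokens
            self.last += new_tokens * self.interval_s

    def take(self, qid):
        if self.tokens.get(qid, 0) > 0:
            self.tokens[qid] -= 1
            return True
        return False

def can_forward_upstream(queue, send_buffer_not_full, rate_ctrl, qid):
    # All three conditions from the text must hold for downlink-to-uplink
    # traffic; `queue` is any object with a packet_counter attribute,
    # e.g. the PacketQueue model from the previous sketch.
    return send_buffer_not_full and queue.packet_counter > 0 and rate_ctrl.take(qid)
```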
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
A data packet arriving from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC deletion module 410 strips the MAC DA, MAC SA and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise the packet is discarded.
The downlink network interface module 401 checks the port's send buffer; if a packet is present, it obtains the Ethernet MAC DA of the corresponding terminal according to the packet's video network destination address DA, prepends the terminal's Ethernet MAC DA, the MAC SA of the Ethernet protocol conversion gateway and the Ethernet length or frame type, and sends the packet.
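A minimal sketch of the two MAC-handling steps, assuming standard 14-byte Ethernet framing; the function names are hypothetical:

```python
ETH_HEADER_LEN = 14   # MAC DA (6 bytes) + MAC SA (6 bytes) + length/frame type (2 bytes)

def strip_ethernet_header(frame: bytes) -> bytes:
    """MAC deletion module 410: remove the MAC DA, MAC SA and length/frame
    type, leaving the bare video networking packet."""
    return frame[ETH_HEADER_LEN:]

def add_ethernet_header(packet: bytes, terminal_mac: bytes,
                        gateway_mac: bytes, eth_type: bytes) -> bytes:
    """MAC adding module 409: prepend the terminal's MAC DA, the gateway's
    MAC SA and the Ethernet length/frame type before sending downstream."""
    assert len(terminal_mac) == 6 and len(gateway_mac) == 6 and len(eth_type) == 2
    return terminal_mac + gateway_mac + eth_type + packet
```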
The other modules of the Ethernet protocol conversion gateway function similarly to those of the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part mainly fall into 3 types: node servers, node switches and metropolitan area servers. A node switch mainly includes a network interface module, a switching engine module and a CPU module; a metropolitan area server mainly includes a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly includes the following parts: Destination Address (DA), Source Address (SA), reserved bytes, payload (PDU) and CRC, arranged as shown in the following table:

DA | SA | Reserved | Payload | CRC
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (e.g. various protocol packets, multicast data packets, unicast data packets, etc.), there are at most 256 possibilities, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses.
The Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA).
The reserved byte consists of 2 bytes.
The payload has a different length depending on the type of the datagram: 64 bytes for the various protocol packets and 1056 bytes for unicast data packets, although it is not limited to these 2 types.
The CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
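The layout in section 2.1 can be illustrated with a short packing routine. This is a sketch under the stated field sizes; using zlib's CRC-32 (the same polynomial as standard Ethernet) is an assumption about the unspecified CRC details:

```python
import struct
import zlib

def build_access_packet(dest_addr: bytes, src_addr: bytes, payload: bytes) -> bytes:
    """Assemble an access network packet per section 2.1: 8-byte DA, 8-byte SA,
    2 reserved bytes, the payload (64 bytes for protocol packets, 1056 bytes
    for unicast data packets), and a 4-byte CRC over everything before it."""
    assert len(dest_addr) == 8 and len(src_addr) == 8
    body = dest_addr + src_addr + b"\x00\x00" + payload
    return body + struct.pack(">I", zlib.crc32(body))

def packet_type(dest_addr: bytes) -> int:
    # The first DA byte encodes the packet type (protocol, multicast,
    # unicast, ...), allowing up to 256 types.
    return dest_addr[0]
```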
2.2 metropolitan area network packet definition
The topology of the metropolitan area network is a graph, and there may be 2 or even more connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, between two node switches, and so on. However, the metropolitan area network address of each metropolitan area network device is unique, so to describe the connection relationships between devices accurately, the embodiment of the present invention introduces a parameter, the label, to uniquely describe a metropolitan area network device.
In this specification the label is defined similarly to a Multi-Protocol Label Switching (MPLS) label. Suppose there are two connections between device A and device B; then a packet going from A to B has 2 labels, and a packet going from B to A has 2 labels. Labels are divided into incoming labels and outgoing labels: assuming a packet's label on entering device A (the incoming label) is 0x0000, its label on leaving device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is centrally controlled, meaning that address allocation and label allocation are both directed by the metropolitan area server while the node switches and node servers execute passively. This differs from MPLS label allocation, which is the result of mutual negotiation between switch and server.
As shown in the following table, the data packet of the metropolitan area network mainly includes the following parts:

DA | SA | Reserved | Label | Payload | CRC

That is, Destination Address (DA), Source Address (SA), reserved bytes, label, payload (PDU) and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; it sits between the reserved bytes and the payload of the packet.
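A sketch of the metropolitan area network layout, extending the access network example with the 32-bit label field; the label-swap helper illustrates the in-label/out-label mapping described above and is hypothetical:

```python
import struct
import zlib

def build_metro_packet(dest_addr: bytes, src_addr: bytes,
                       label: int, payload: bytes) -> bytes:
    """Assemble a metropolitan area network packet per section 2.2: the 32-bit
    label (upper 16 bits reserved, lower 16 bits used) sits between the
    reserved bytes and the payload."""
    assert len(dest_addr) == 8 and len(src_addr) == 8
    assert 0 <= label <= 0xFFFF            # only the low 16 bits carry the label
    body = dest_addr + src_addr + b"\x00\x00" + struct.pack(">I", label) + payload
    return body + struct.pack(">I", zlib.crc32(body))

def swap_label(label_map: dict, in_label: int) -> int:
    # In-label to out-label mapping (e.g. 0x0000 -> 0x0001), assigned
    # centrally by the metropolitan area server rather than negotiated.
    return label_map[in_label]
```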
Based on the above characteristics of the video network, one of the core concepts of the embodiment of the invention is proposed: the first video network node server performs face recognition on the video stream collected by the video network monitoring device to obtain face feature data, then judges whether the recognized face feature data matches the face feature data of the participants, and if so, notifies the second video network node server to start the video conference.
Referring to fig. 5, a flowchart of the steps of an embodiment of a video conference processing method of the present invention is shown. The method may be applied to a video network that includes a first video network node server, a second video network node server and a video network monitoring device, where the first video network node server communicates with the second video network node server and the video network monitoring device, respectively. The method may specifically include the following steps:
in step 501, a first video networking node server receives a video stream from a video networking monitoring device.
In the embodiment of the invention, the first video network node server can be used for storing the face feature database and carrying out face recognition. The video network monitoring equipment can be a camera or a set-top box and the like. The embodiment of the invention does not specifically limit the first node server of the video network and the monitoring equipment of the video network.
In a preferred embodiment of the present invention, the video network monitoring device may be located inside or outside the venue of the video conference. The device does not collect video streams continuously; to reduce its workload and the volume of video data transmitted to the first video network node server, its working time may be preset, i.e. its work start time and work end time are configured in advance. Typically, the work start time is set to some point before the preset start time of the video conference, for example 20 minutes before it: if the conference is preset to start at 9:00, the monitoring device starts working at 8:40. The work end time is set to the preset end time of the conference: if the conference is preset to end at 11:00, the monitoring device stops working at 11:00. The video stream in the embodiment of the present invention is therefore a video stream acquired by the video network monitoring device within a preset time range before the preset start time of the video conference.
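The working window can be computed directly from the conference's preset times. A minimal sketch using the 20-minute lead from the example (the lead length itself is configurable, not fixed by the patent; the date below is illustrative):

```python
from datetime import datetime, timedelta

def monitoring_window(conf_start: datetime, conf_end: datetime,
                      lead_minutes: int = 20):
    """Work window of the video network monitoring device: capture begins a
    preset interval before the conference's preset start time and ends at the
    conference's preset end time."""
    return conf_start - timedelta(minutes=lead_minutes), conf_end

# The example from the text: a conference preset for 9:00 to 11:00 gives a
# capture window of 8:40 to 11:00.
start, end = monitoring_window(datetime(2018, 8, 3, 9, 0),
                               datetime(2018, 8, 3, 11, 0))
assert (start.hour, start.minute) == (8, 40) and end.hour == 11
```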
In the embodiment of the present invention, the video networking monitoring device may be controlled by a second video networking node server, and the second video networking node server may be a video conference management server.
In the embodiment of the present invention, in terms of content, the video stream may include a head video stream, an upper-body video stream or a whole-body video stream of a person, for example a frontal head video stream, a frontal upper-body video stream or a frontal whole-body video stream. In terms of category, the video stream may be composed of multiple consecutive video frames of a video sequence, or it may be a composite video stream, and so on.
Step 502, the first video networking node server performs face recognition on each frame of video image in the video stream to obtain face feature data.
In the embodiment of the invention, the first video network node server may perform face recognition on each frame of video image in the video stream through a built-in neural network model to obtain face feature data. The face feature data may include feature data of facial key points; there may be one key point or several. Specifically, the feature data of the facial key points may be feature vectors, for example raw feature vectors extracted from each frame of image or processed feature vectors.
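A sketch of step 502, assuming generic detect_faces and embed_face callables stand in for the built-in neural network model, which the patent does not name:

```python
from typing import Callable, List
import numpy as np

def extract_face_features(frames, detect_faces: Callable,
                          embed_face: Callable) -> List[List[np.ndarray]]:
    """Run face recognition over every frame of the video stream and return,
    per frame, one feature vector per detected face (an empty list when the
    frame contains no face)."""
    features_per_frame = []
    for frame in frames:
        boxes = detect_faces(frame)                  # facial key-point regions
        features_per_frame.append([embed_face(frame, box) for box in boxes])
    return features_per_frame
```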
Step 503, the first node server of the video network matches the face feature data with the face feature data of each participant respectively to obtain a matching result corresponding to each participant.
In the embodiment of the invention, the first video network node server performs face recognition on each frame of video image of the video stream to obtain several pieces of face feature data per frame. In practice, not every frame contains a human face; that is, among the face feature data obtained per frame, some entries are actual feature vectors and some are null. Moreover, face recognition on a single frame may yield face feature data for several persons or for just one person.
In a preferred embodiment of the present invention, before the first video network node server matches the face feature data with the face feature data of each participant to obtain the matching result corresponding to each participant, it needs to determine the face feature data of each participant from a preset face feature database and the related information of each participant preset in the second video network node server. The face feature database may be preset in the first video network node server and may contain each person's related information, such as name, job number, department and position, together with the corresponding face feature data. Note that a person's name and/or job number can serve as the person's unique identifier, so the face feature data in the database corresponds to the person's name and/or job number. The second video network node server presets the related information of each participant, including name, job number, department and position, and also presets the related information of the video conference, such as its preset start time and preset end time.
In practical application, suppose the face feature database stores, for person P1, the name "P1n", job number "P1002" and face feature data "feature vector PL1"; for person P2, the name "P2n", job number "P2003" and "feature vector PL2"; for person P3, the name "P3n", job number "P3004" and "feature vector PL3"; and for person P4, the name "P4n", job number "P4005" and "feature vector PL4". The second video network node server presets the name "P1n" of participant P1, the name "P3n" and job number "P3004" of participant P3, and the job number "P4005" of participant P4. The first video network node server obtains the participants' related information from the second video network node server and retrieves it in the face feature database to obtain the face feature data corresponding to each participant: feature vector PL1 for participant P1, feature vector PL3 for participant P3 and feature vector PL4 for participant P4. If face recognition on the video stream yields face feature data D1, D2, D3, D4 and D5, the first video network node server matches D1 through D5 against PL1, PL3 and PL4 respectively, obtaining matching results J1 to J5 for participant P1, J6 to J10 for participant P3, and J11 to J15 for participant P4. Each matching result indicates either a successful or a failed match.
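Step 503 can be sketched as follows. The patent does not fix a matching metric, so cosine similarity with a threshold is an assumed choice; the names mirror the example above (D1 to D5 matched against PL1, PL3 and PL4):

```python
from typing import Dict, List
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_participants(detected: List[np.ndarray],
                       participants: Dict[str, np.ndarray],
                       threshold: float = 0.8) -> Dict[str, List[bool]]:
    """Match every detected feature vector against every participant's stored
    vector. With detected = [D1..D5] and participants = {"P1": PL1, "P3": PL3,
    "P4": PL4}, the returned lists correspond to J1..J5, J6..J10 and J11..J15."""
    return {name: [cosine_similarity(d, ref) >= threshold for d in detected]
            for name, ref in participants.items()}
```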
In step 504, the first video network node server sends the matching results to the second video network node server.
In the embodiment of the invention, after the second video network node server receives the matching results, it can start the video conference when the matching results meet the preset matching condition. The preset matching condition may be that the matching results show the first video network node server has identified the face feature data of every participant. Continuing the example above, this means at least one successful match exists among J1 to J5, at least one among J6 to J10, and at least one among J11 to J15.
It should be noted that the preset matching condition indicates that every participant has arrived at the venue of the video conference; in practical application, the condition may also be considered met when certain key participants among them have arrived.
Moreover, after the second video network node server determines that the matching results meet the preset matching condition, it can judge whether the current time has reached the preset start time of the video conference. If it has, reminder information can be generated and displayed, such as "all participants are in place; the video conference starts now" (the reminder may be text and/or voice), and the video conference is started at the same time. If it has not, reminder information can be generated and displayed, such as "all participants are in place; the video conference starts in 3 minutes", together with a countdown to the start of the conference, and the conference is started when the countdown ends. The embodiment of the invention does not specifically limit the technical means used to start the video conference.
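The start decision of the second video network node server can be sketched as a small control routine; the matching condition follows the text (at least one successful match per participant), while the reminder strings and return values are hypothetical:

```python
from datetime import datetime
from typing import Dict, List

def matching_condition_met(results: Dict[str, List[bool]]) -> bool:
    """Preset matching condition: at least one successful match exists for
    every participant, i.e. everyone has been recognized on camera."""
    return all(any(matches) for matches in results.values())

def decide_start(results: Dict[str, List[bool]],
                 preset_start: datetime, now: datetime) -> str:
    if not matching_condition_met(results):
        return "wait"                        # keep collecting matching results
    if now >= preset_start:
        return "all participants are in place; starting the video conference now"
    minutes_left = int((preset_start - now).total_seconds() // 60)
    return f"all participants are in place; the conference starts in {minutes_left} minutes"
```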
The embodiment of the invention is applied to a video network that comprises a first video network node server, a second video network node server and a video network monitoring device, where the first video network node server communicates with the second video network node server and the video network monitoring device, respectively.
In the embodiment of the invention, a first video networking node server receives a video stream from video networking monitoring equipment, and performs face recognition on each frame of video image in the video stream to obtain face feature data. And matching the face characteristic data with the face characteristic data of each participant respectively to obtain a matching result corresponding to each participant. And finally, sending the matching result to a second video network node server. And the second video networking node server is used for starting the video conference when the matching result meets the preset matching condition.
The embodiment of the invention applies the characteristics of the video network, sets the related information of the participants in advance on the second video network node server, and the first video network node server performs face recognition on the video stream acquired by the video network monitoring equipment so as to judge whether the participants exist in the video stream. The second video networking node server may initiate the video conference when one or more participants are present in the video stream. The embodiment of the invention simplifies the operation steps of video conference joining and saves the labor cost and the time cost.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a block diagram of an embodiment of a video conference processing system of the present invention is shown. The system may be applied to a video network that includes a first video network node server, a second video network node server and a video network monitoring device, where the first video network node server communicates with the second video network node server and the video network monitoring device, respectively. The first video network node server in the system may specifically include the following modules:
the receiving module 601 is configured to receive a video stream from a video networking monitoring device.
The identifying module 602 is configured to perform face identification on each frame of video image in the video stream to obtain face feature data.
And the matching module 603 is configured to match the face feature data with the face feature data of each participant, so as to obtain a matching result corresponding to each participant.
A sending module 604, configured to send the matching result to a second node server of the video networking, where the second node server of the video networking is configured to start the video conference when the matching result meets a preset matching condition.
In a preferred embodiment of the present invention, the first video network node server further comprises: a determining module 605, configured to determine the face feature data of each participant according to a preset face feature database and the preset related information of each participant in the second node server of the video network before the matching module 603 matches the face feature data with the face feature data of each participant respectively to obtain a matching result corresponding to each participant.
In a preferred embodiment of the present invention, the determining module 605 is configured to retrieve the related information of each participant in the face feature database, so as to obtain the face feature data of each participant corresponding to the related information of each participant; wherein the related information comprises names of the participants and/or job numbers of the participants.
In a preferred embodiment of the invention, the second video network node server is used for starting the video conference when the matching result indicates that the first video network node server identifies the face feature data of each participant.
In a preferred embodiment of the present invention, the video stream is a video stream acquired by the video networking monitoring device within a preset time range before the preset start time of the video conference.
For the system embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method for processing a video conference and the system for processing a video conference provided by the present invention are introduced in detail, and a specific example is applied in the text to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method of the present invention and the core idea thereof; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A processing method of a video conference is applied to a video network, the video network comprises a first video network node server, a second video network node server and a video network monitoring device, the first video network node server is respectively communicated with the second video network node server and the video network monitoring device, and the method comprises the following steps:
the first video network node server receives a video stream from the video network monitoring equipment;
the first video networking node server performs face recognition on each frame of video image in the video stream to obtain face feature data;
the first video network node server respectively matches the face characteristic data with the face characteristic data of each participant to obtain a matching result corresponding to each participant;
and the first video network node server sends the matching result to the second video network node server, and the second video network node server is used for starting a video conference when the matching result meets a preset matching condition.
2. The video conference processing method according to claim 1, wherein before the first video network node server matches the face feature data with the face feature data of each participant to obtain the matching result corresponding to each participant, the method further comprises:
and the first video network node server determines the face feature data of each participant according to a preset face feature database and the preset related information of each participant in the second video network node server.
3. The video conference processing method according to claim 2, wherein the determining, by the first video network node server, of the face feature data of each participant according to the preset face feature database and the related information of each participant preset in the second video network node server comprises:
the first video network node server searches the related information of each participant in the face feature database to obtain the face feature data of each participant corresponding to the related information of each participant;
wherein the related information comprises names of the participants and/or job numbers of the participants.
4. The video conference processing method according to claim 1, wherein the second video network node server is configured to start the video conference when the matching result indicates that the first video network node server has identified the face feature data of each participant.
5. The processing method of the video conference as claimed in claim 1, wherein the video stream is a video stream captured by the video networking monitoring device within a preset time range before a preset start time of the video conference.
6. A video conference processing system, applied to a video network, wherein the video network comprises a first video network node server, a second video network node server and a video network monitoring device, and the first video network node server communicates with the second video network node server and the video network monitoring device respectively, the first video network node server comprising:
a receiving module, configured to receive the video stream from the video network monitoring device;
an identification module, configured to perform face recognition on each frame of video image in the video stream to obtain face feature data;
a matching module, configured to match the face feature data with the face feature data of each participant respectively, to obtain a matching result corresponding to each participant;
and a sending module, configured to send the matching result to the second video network node server, wherein the second video network node server is configured to start a video conference when the matching result meets a preset matching condition.
7. The video conference processing system according to claim 6, wherein the first video network node server further comprises:
a determining module, configured to determine the face feature data of each participant according to a preset face feature database and related information of each participant preset in the second video network node server, before the matching module matches the face feature data with the face feature data of each participant to obtain the matching result corresponding to each participant.
8. The video conference processing system according to claim 7, wherein the determining module is configured to search the face feature database for the related information of each participant to obtain the face feature data of each participant corresponding to that related information;
wherein the related information comprises names of the participants and/or job numbers of the participants.
9. The video conference processing system according to claim 6, wherein the second video network node server is configured to start the video conference when the matching result indicates that the first video network node server has recognized the face feature data of every participant.
10. The video conference processing system according to claim 6, wherein the video stream is captured by the video network monitoring device within a preset time range before a preset start time of the video conference.
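
Purely as an illustration of the module decomposition in claims 6 to 10, the first video network node server can be organized as cooperating components. The class, the stream() and submit() interfaces, and the reuse of the earlier match_participants sketch are all invented for this sketch, not taken from the patent.

    class FirstVideoNetworkNodeServer:
        """Illustrative decomposition: receiving, identification, matching,
        and sending modules on the first video network node server."""

        def __init__(self, participant_features):
            self.participant_features = participant_features

        def receive(self, monitoring_device):
            # Receiving module: video stream from the monitoring device.
            return monitoring_device.stream()

        def identify_and_match(self, frames):
            # Identification and matching modules, reusing the earlier sketch.
            return match_participants(frames, self.participant_features)

        def send(self, results, second_node_server):
            # Sending module: forward the matching result; the second node
            # server starts the conference when its preset condition is met.
            second_node_server.submit(results)
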
CN201810878933.9A 2018-08-03 2018-08-03 Video conference processing method and system Pending CN110798648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810878933.9A CN110798648A (en) 2018-08-03 2018-08-03 Video conference processing method and system

Publications (1)

Publication Number Publication Date
CN110798648A (en) 2020-02-14

Family

ID=69425793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810878933.9A Pending CN110798648A (en) 2018-08-03 2018-08-03 Video conference processing method and system

Country Status (1)

Country Link
CN (1) CN110798648A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201251796Y (en) * 2008-08-29 2009-06-03 中科院成都信息技术有限公司 Signing system based on face recognition
CN102215117A (en) * 2010-04-09 2011-10-12 夏普株式会社 Electronic conferencing system, electronic conference operations method and conference operations terminal
JP2015125480A (en) * 2013-12-25 2015-07-06 東芝テック株式会社 Commodity sales data processor and program
CN106209725A (en) * 2015-04-30 2016-12-07 中国电信股份有限公司 Method, video conference central server and system for video conference certification
CN106228628A (en) * 2016-07-15 2016-12-14 腾讯科技(深圳)有限公司 System, the method and apparatus of registering based on recognition of face
CN107680185A (en) * 2017-09-22 2018-02-09 芜湖星途机器人科技有限公司 The method for using robot register in meeting-place

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861414A (en) * 2020-07-28 2020-10-30 杭州海康威视数字技术股份有限公司 Conference attendance system, method and equipment
CN111861414B (en) * 2020-07-28 2023-09-29 杭州海康威视数字技术股份有限公司 Conference attendance checking system, method and equipment
CN111931649A (en) * 2020-08-10 2020-11-13 随锐科技集团股份有限公司 Face recognition method and system in video conference process
CN114245065A (en) * 2021-12-20 2022-03-25 深圳市音络科技有限公司 Positioning tracking method and system for conference system and electronic equipment

Similar Documents

Publication Publication Date Title
CN108632525B (en) Method and system for processing service
CN110049271B (en) Video networking conference information display method and device
CN109309806B (en) Video conference management method and system
CN110190973B (en) Online state detection method and device
CN109120879B (en) Video conference processing method and system
CN110572607A (en) Video conference method, system and device and storage medium
CN109768963B (en) Conference opening method and system based on video network
CN110049273B (en) Video networking-based conference recording method and transfer server
CN109379254B (en) Network connection detection method and system based on video conference
CN109788235B (en) Video networking-based conference recording information processing method and system
CN109246135B (en) Method and system for acquiring streaming media data
CN109040656B (en) Video conference processing method and system
CN109191808B (en) Alarm method and system based on video network
CN109218306B (en) Audio and video data stream processing method and system
CN110798648A (en) Video conference processing method and system
CN109873864B (en) Communication connection establishing method and system based on video networking
CN109743284B (en) Video processing method and system based on video network
CN109302384B (en) Data processing method and system
CN109005378B (en) Video conference processing method and system
CN110113555B (en) Video conference processing method and system based on video networking
CN110557370B (en) Method, system, electronic equipment and storage medium for pamir synchronization of terminal information
CN110049100B (en) Service data processing method and system
CN110012063B (en) Data packet processing method and system
CN109889516B (en) Method and device for establishing session channel
CN110049069B (en) Data acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200214