CN113727062A - Video conference system and method for processing image data - Google Patents

Video conference system and method for processing image data

Info

Publication number
CN113727062A
CN113727062A (application number CN202111280935.6A)
Authority
CN
China
Prior art keywords
conference
image data
characters
important
person
Prior art date
Legal status
Pending
Application number
CN202111280935.6A
Other languages
Chinese (zh)
Inventor
安佳兵
Current Assignee
Shenzhen Yunji Intelligent Information Co ltd
Original Assignee
Shenzhen Yunji Intelligent Information Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yunji Intelligent Information Co ltd
Priority to CN202111280935.6A
Publication of CN113727062A

Classifications

    • H04N7/15 Conference systems (H04N Pictorial communication, e.g. television; H04N7/00 Television systems; H04N7/14 Systems for two-way working)
    • H04L65/403 Arrangements for multi-party communication, e.g. for conferences (H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication; H04L65/40 Support for services or applications)
    • H04L65/75 Media network packet handling (H04L65/60 Network streaming of media packets)
    • H04L65/80 Responding to QoS

Abstract

The invention provides a video conference system and method for processing image data, applied in the field of communication technology. Conference person image data comprising secondary-person image data and important-person image data is acquired, and the facial features of the conference secondary persons and the conference important persons are recorded; an iterative algorithm is applied to the conference person image data according to these facial features to obtain reconstructed conference person image data; image data forms corresponding to the facial features of the conference secondary persons and the conference important persons are compiled from the secondary-person image data and the important-person image data respectively, and the conference person image data is re-matched according to these forms to obtain processed image data. By correcting the image data generated during the conference, the invention effectively reduces the problem of unclear image transmission during a video conference.

Description

Video conference system and method for processing image data
Technical Field
The present invention relates to the field of communication technologies, and in particular to a video conference system and method for processing image data.
Background
With the development of communication technology, conference forms have gradually diversified; existing conference forms include not only the traditional single-point conference but also multipoint video conferences, multipoint voice conferences and the like; a multipoint conference is a real-time conference established across different physical locations by means of audio communication, video communication or the like, and there are usually different participants at each location; participants in a current video conference system can see and hear the images and sounds of participants at other conference sites synchronously and in real time, and can also send electronic documents in real time, thereby greatly reducing conference costs and shortening conference time.
However, during a video conference, various adverse factors often cause the video image to freeze or exhibit other problems during transmission, which makes the whole video conference inconvenient.
In view of the above, the present invention provides a video conference system and method for processing image data, to solve the problem of discontinuous and unsynchronized video image data transmission in a video conference.
Disclosure of Invention
The invention aims to solve the problem of discontinuous and unsynchronized video image data transmission in a video conference, and provides a video conference system and method for processing image data.
The invention provides an image data processing video conference system, comprising:
the system comprises an acquisition module, configured to acquire conference person image data comprising secondary-person image data and important-person image data, the conference person image data comprising face data, and to receive and record the facial features of the conference secondary persons and the facial features of the conference important persons, wherein the facial features of the conference secondary persons and the conference important persons comprise features of the five sense organs;
and a matching module, configured to apply an iterative algorithm to the image data of the conference secondary persons and the conference important persons according to the facial features of the conference secondary persons and the facial features of the conference important persons, to obtain reconstructed image data of the conference secondary persons and the conference important persons; and to list the secondary-person image data and the important-person image data, respectively, in image data forms corresponding to the facial features of the conference secondary persons and the conference important persons, and to re-match the conference person image data according to the image data forms of the conference secondary persons and the conference important persons to obtain processed image data.
Further, the obtaining module further comprises:
an obtaining subunit configured to obtain conference person image data including secondary person image data and important person image data, the person image data including face data;
the receiving and recording subunit is used for receiving and recording the facial features of the secondary conference characters and the facial features of the important conference characters; wherein the facial features of the conference secondary characters and the facial features of the conference important characters comprise five-sense-organ features.
Further, the matching module further comprises:
a reconstruction subunit, configured to apply an iterative algorithm to the image data of the conference secondary persons and the conference important persons according to the facial features of the conference secondary persons and the facial features of the conference important persons, to obtain reconstructed image data of the conference secondary persons and the conference important persons;
and a processing subunit, configured to list the secondary-person image data and the important-person image data, respectively, in image data forms corresponding to the facial features of the conference secondary persons and the conference important persons, and to re-match the conference person image data according to the image data forms of the conference secondary persons and the conference important persons to obtain processed image data.
The invention also provides a video conference method for processing image data, which comprises the following steps:
acquiring conference figure image data comprising secondary figure image data and important figure image data, wherein the figure data comprises face data;
receiving the face features of the secondary characters of the conference and the face features of the important characters of the conference; wherein the facial features of the conference secondary characters and the conference important characters comprise facial features of five sense organs;
performing an iterative algorithm on the image data of the secondary conference characters and the important conference characters according to the facial features of the secondary conference characters and the facial features of the important conference characters to obtain the image data of the secondary conference characters and the important conference characters which are completely reformed;
and respectively listing the image data of the secondary person and the image data of the important conference person in image data forms corresponding to the facial features of the secondary person and the important conference person, and re-matching the image data of the conference person according to the image data forms of the secondary person and the important conference person to obtain processed image data.
Further, before the step of acquiring the image data of the conference person, the method further includes:
identifying all the persons participating in the conference according to a preset conference figure form;
acquiring the information of the persons participating in the conference, wherein the information of the persons participating in the conference comprises clothes of the persons, body shapes of the persons and faces of the persons;
and matching and classifying the conference people according to the conference people information, wherein the conference people are conference secondary people and conference important people.
Further, the step of recording the facial features of the person comprises:
performing pixel decomposition on the conference figure image data according to the acquired conference figure image data to obtain image data with optimal pixel mixing but not decomposed;
repeatedly adjusting sub-pixels in all pixels of the image according to the percentage value in the image data of which the optimal pixels are mixed but not decomposed until all pixels in the image are processed;
and after the processing is finished, obtaining a sub-pixel level image with extremely high resolution, and recording the facial features of the people according to the sub-pixel level image.
Further, after the step of recording the facial features of the person, the method further comprises the following steps:
scanning the facial features of the recorded characters to acquire subtle features in the facial features of the recorded characters, wherein the subtle features are unique features of the face of each character;
and correspondingly marking each character image data of the conference according to the unique characteristics of the face of each character, wherein the marking is a method for identifying the double verification identity of each character in the conference.
Further, the step of performing an iterative algorithm on the image data of the secondary conference person and the important conference person further includes:
acquiring the image data of the conference people, correcting and estimating according to the image data of the conference people, and analyzing to obtain parameter values of the image data of the conference people;
and acquiring face data of the conference persons, correcting and estimating according to the face data of the conference persons, and decomposing to obtain parameter values of the face data of the conference persons.
Further, after the step of performing an iterative algorithm on the image data of the secondary conference character and the important conference character, the method further includes:
carrying out data processing on the parameter value of the image data of the conference figure and the parameter value of the face data of the conference figure to obtain the parameter value of the conference figure after the processing is finished; wherein the data processing comprises: data partitioning and data cleaning;
extracting the processed conference figure data according to the processed conference figure parameter values; wherein the characteristic data comprises: changes in human expression;
and matching the processed conference figure parameter values with the processed conference figure data to perform an iterative algorithm, and calculating the number of times of changing the character expression, the total character expression and the time interval of changing the character expression by the iterative algorithm.
Further, the step of matching the image data of the conference person in accordance with the image data form of the conference person includes:
after the image data form records the image data of the conference characters and the face data of the conference characters, scanning the data and the characteristics of the conference characters, and comparing the data and the characteristics with the image data form;
and if the image data or the face data of the conference figure which is not matched with the image data form is scanned, carrying out secondary image data search on the unmatched conference figure, and re-acquiring the image data and the face data of the conference figure which are matched with the image data form.
The video conference system and method for processing image data provided by the invention have the following beneficial effects:
1. by correcting the image data generated during the conference, the invention effectively reduces the problem of unclear image transmission during a video conference;
2. by comparing the image data generated during the conference, the invention effectively reduces the problem of data errors during a video conference.
Drawings
FIG. 1 is a block diagram of an embodiment of an image data processing video conferencing system according to the present invention;
FIG. 2 is a flowchart illustrating an embodiment of a method for image data processing video conferencing;
fig. 3 is an algorithm diagram of an iterative algorithm of the image data processing video conference system according to the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention; the implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a video conference system for processing image data according to an embodiment of the present invention includes:
the acquisition module is used for acquiring conference figure image data comprising secondary figure image data and important figure image data, wherein the figure data comprises face data; receiving the face features of the secondary characters of the conference and the face features of the important characters of the conference; wherein the facial features of the conference secondary characters and the conference important characters comprise facial features of five sense organs;
the matching module is used for carrying out an iterative algorithm on the image data of the secondary conference characters and the important conference characters according to the facial features of the secondary conference characters and the facial features of the important conference characters to obtain the reconstructed image data of the secondary conference characters and the reconstructed image data of the important conference characters; and respectively listing the image data of the secondary person and the image data of the important conference person in image data forms corresponding to the facial features of the secondary person and the important conference person, and re-matching the image data of the conference person according to the image data forms of the secondary person and the important conference person to obtain processed image data.
In a specific embodiment: the acquisition module acquires conference figure image data comprising secondary figure image data and important figure image data; receiving the face features of the secondary characters of the conference and the face features of the important characters of the conference; the matching module carries out an iterative algorithm on the image data of the secondary conference figure and the important conference figure according to the face characteristics of the secondary conference figure and the face characteristics of the important conference figure to obtain the reconstructed image data of the secondary conference figure and the important conference figure; respectively listing image data of the secondary people and the image data of the important conference people in image data forms corresponding to the facial features of the secondary people and the facial features of the important conference people, and re-matching image data of the conference people according to the image data forms of the secondary people and the important conference people to obtain processed image data;
the conference person image data is specifically the face data of each person participating in the video conference; the face data is further divided into face data of secondary persons and face data of important persons participating in the video conference, and a preset conference participation form is consulted to distinguish the secondary persons from the important persons, for example: a secretary without a conference credential is a secondary person, while a board chairman holding a conference credential is an important person;
the conference person face data is specifically the features of the five sense organs of each person participating in the video conference; these facial features are further divided into those of the secondary persons and those of the important persons participating in the video conference, and the five-sense-organ features of each person are recorded separately for the identified conference secondary persons and conference important persons;
the process of applying the iterative algorithm to the conference persons specifically comprises the following steps: performing data processing on the conference person image data and the facial features of the persons, the data processing comprising data division and data cleaning; then extracting the data features of the conference persons, and calculating, through the iterative algorithm, the number of person expression changes, the person expression total and the time interval between expression changes;
data division specifically concerns the image data that the conference system holds for secondary persons appearing near important persons during the conference, for example: a board chairman has a secretary standing beside him during the conference; the secretary, being a secondary person, has his or her data divided off, so the image data of the chairman is retained and the image data of the secretary is partitioned away;
data cleaning specifically means screening and checking the historical data in the conference system and cleaning out text and data that no longer match the current situation, for example: if the appearance of an important person such as a board chairman recorded by the existing system has changed and no longer matches the current situation, the appearance data recorded by the existing system is cleaned out;
extracting the data features of the conference persons means capturing and acquiring, in real time during the conference, the facial expression changes of the conference persons;
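As an illustration only and not part of the original disclosure, the data division and data cleaning described above could be sketched as follows; the record fields (role, appearance) and the helper names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    name: str
    role: str          # "important" or "secondary"
    image_data: bytes  # captured frame region for this person
    appearance: str    # summary/hash of the appearance recorded by the system

def divide_data(records):
    """Data division: retain image data of important persons and split off secondary persons."""
    important = [r for r in records if r.role == "important"]
    secondary = [r for r in records if r.role == "secondary"]
    return important, secondary

def clean_data(stored, current_appearance):
    """Data cleaning: drop stored records whose appearance no longer matches the current capture."""
    return [r for r in stored if current_appearance.get(r.name) == r.appearance]

records = [
    PersonRecord("chairman", "important", b"...", "hash-a"),
    PersonRecord("secretary", "secondary", b"...", "hash-b"),
]
important, secondary = divide_data(records)          # the secretary's data is divided off
kept = clean_data(important, {"chairman": "hash-c"}) # changed appearance -> record cleaned out
```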
the process of calculating the number of expression changes, the expression total and the time interval between expression changes through the iterative algorithm is specifically to match the person expression data already stored in the conference system with the person expression change data newly captured by the conference system, and to accumulate cell values in the system through iterative calculation and circular references; for example: suppose the system has three cells, X, Y and Z, for recording the course of a person's expression changes, where data is recorded in X, all data recorded in X is accumulated in Y, and the number of times X is recorded is accumulated in Z;
assuming that the number of iterative computations is 1, the following formula is entered in the cell of Y:
Y = Y + X
that is, the value of X is accumulated onto the value of Y;
the cell entry formula at Z is as follows:
Z = IF(AND(X = 0, Z = 0), 0, IF(X = 0, Z, Z + 1))
meaning that if X and Z are both 0, the formula returns 0; if X is 0 but Z is greater than 0, the formula returns the value of Z; if neither condition is satisfied, 1 is added to the value of Z;
when 10 is recorded in the cell of X, the formula in Y returns 10 and the formula in Z returns 1, representing the first accumulation, with a total of 10;
when 40 is then recorded in the cell of X, the formula in Y returns 50 and the formula in Z returns 2, representing the second accumulation, with a total of 50;
here X is the number of person expression changes recorded in the conference system, Y is the person expression total obtained in the conference system, and Z is the time interval between an expression change and its real-time capture by the conference system.
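Purely as an illustration of the cell-accumulation logic described above, and not as code taken from the patent, the behaviour of X, Y and Z can be reproduced as follows; the function and variable names are assumptions introduced for this sketch.

```python
def update_cells(x, y, z):
    """One iteration of the circular-reference accumulation:
    Y accumulates the value recorded in X, and Z counts the accumulations."""
    y = y + x                  # Y = Y + X
    if x == 0 and z == 0:
        z = 0                  # nothing has been recorded yet
    elif x == 0:
        z = z                  # no new value recorded: Z is unchanged
    else:
        z = z + 1              # a new value was recorded in X: count it
    return y, z

y, z = 0, 0
y, z = update_cells(10, y, z)   # first accumulation:  y == 10, z == 1
y, z = update_cells(40, y, z)   # second accumulation: y == 50, z == 2
```

The two calls reproduce the worked example above: recording 10 and then 40 in X yields totals of 10 and 50 in Y and accumulation counts of 1 and 2 in Z.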
Referring to fig. 2, a video conference method for processing image data according to an embodiment of the present invention includes:
s1: acquiring conference figure image data comprising secondary figure image data and important figure image data, wherein the figure data comprises face data;
s2: receiving the face features of the secondary characters of the conference and the face features of the important characters of the conference; wherein the facial features of the conference secondary characters and the conference important characters comprise facial features of five sense organs;
s3: performing an iterative algorithm on the image data of the secondary conference characters and the important conference characters according to the facial features of the secondary conference characters and the facial features of the important conference characters to obtain the image data of the secondary conference characters and the important conference characters which are completely reformed;
s4: and respectively listing the image data of the secondary person and the image data of the important conference person in image data forms corresponding to the facial features of the secondary person and the important conference person, and re-matching the image data of the conference person according to the image data forms of the secondary person and the important conference person to obtain processed image data.
In a specific embodiment: acquiring conference character image data comprising secondary character image data and important character image data; receiving the face features of the secondary characters of the conference and the face features of the important characters of the conference; carrying out an iterative algorithm on the image data of the secondary conference character and the important conference character according to the face characteristics of the secondary conference character and the face characteristics of the important conference character to obtain the image data of the secondary conference character and the important conference character which are completely reformed; and respectively listing image data of the secondary person and the image data of the important conference person in image data forms corresponding to the facial features of the secondary person and the important conference person, and re-matching the image data of the conference person according to the image data forms of the secondary person and the important conference person to obtain processed image data.
In one embodiment: before the step of obtaining the image data of the conference person, the method further comprises the following steps:
identifying all the persons participating in the conference according to a preset conference figure form;
acquiring the information of the persons participating in the conference, wherein the information of the persons participating in the conference comprises clothes of the persons, body shapes of the persons and faces of the persons;
and matching and classifying the conference people according to the conference people information, wherein the conference people are conference secondary people and conference important people.
In a specific embodiment: all persons participating in the conference are captured, and matching comparison is performed against the preset conference participants according to the clothing, body shape and face of each conference person, so as to distinguish the secondary persons from the important persons among the participants; the corresponding matching operation and classification are then performed according to the identified persons;
a person not wearing conference attire is identified as a secondary person, for example: a cleaner doing sanitation work, or a secretary organising the conference proceedings;
a person wearing conference attire is identified as an important person, for example: a manager wearing conference attire and a badge, or a CEO leading the conference proceedings.
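A minimal sketch of this matching and classification rule is given below; the attributes (wears_conference_attire, has_badge), assumed to be derived from the captured clothing, body-shape and face information, are illustrative only.

```python
def classify_participant(wears_conference_attire: bool, has_badge: bool) -> str:
    """Assumed rule based on the examples above: anyone without conference attire is a
    secondary person; conference attire (typically with a badge) marks an important person."""
    if wears_conference_attire and has_badge:
        return "important"   # e.g. a manager wearing conference attire and a badge, or the CEO
    if wears_conference_attire:
        return "important"   # attire alone is still taken as a sign of an important person (assumption)
    return "secondary"       # e.g. a cleaner, or a secretary organising the conference

print(classify_participant(False, False))  # secondary
print(classify_participant(True, True))    # important
```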
In one embodiment: the step of recording the facial features of the person comprises the following steps:
performing pixel decomposition on the conference figure image data according to the acquired conference figure image data to obtain image data with optimal pixel mixing but not decomposed;
repeatedly adjusting sub-pixels in all pixels of the image according to the percentage value in the image data of which the optimal pixels are mixed but not decomposed until all pixels in the image are processed;
and after the processing is finished, obtaining a sub-pixel level image with extremely high resolution, and recording the facial features of the people according to the sub-pixel level image.
In a specific embodiment: pixel decomposition is performed on the acquired conference person image data to obtain image data in which the pixels are optimally mixed but not yet decomposed; the sub-pixels of all pixels in the image data are repeatedly adjusted according to the percentage value contained in that image data until all pixels in the image have been processed, yielding a sub-pixel-level image with extremely high resolution from which the facial features of the persons are recorded;
the specific process of pixel decomposition is to take each pixel point captured in the person image data as a coordinate center and to partition off the pixel point data of the other coordinates around that center;
adjusting the sub-pixels of the image data according to the percentage value contained in the optimally mixed but undecomposed image data specifically means adjusting the X-axis or Y-axis coordinate data of each pixel point according to the coordinate data contained in that pixel point's data, until all pixels in the image form complete pixel image data.
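The patent does not spell out the arithmetic of this adjustment, so the following sketch rests on two assumptions: that the "percentage value" is a fractional sub-pixel offset along the X or Y axis, and that the per-pixel adjustment is a bilinear resampling. It merely illustrates visiting every pixel and sampling at sub-pixel coordinates.

```python
import numpy as np

def subpixel_shift(image: np.ndarray, dx_percent: float, dy_percent: float) -> np.ndarray:
    """Shift a grayscale image by a fractional (sub-pixel) offset given as a percentage of
    one pixel, using bilinear interpolation; every pixel is visited until all are processed."""
    dx, dy = dx_percent / 100.0, dy_percent / 100.0
    h, w = image.shape
    out = np.zeros_like(image, dtype=np.float32)
    for y in range(h):
        for x in range(w):
            # sample the source image at the shifted, non-integer coordinate
            sx = min(x + dx, w - 1.001)
            sy = min(y + dy, h - 1.001)
            x0, y0 = int(sx), int(sy)
            fx, fy = sx - x0, sy - y0
            out[y, x] = ((1 - fx) * (1 - fy) * image[y0, x0]
                         + fx * (1 - fy) * image[y0, x0 + 1]
                         + (1 - fx) * fy * image[y0 + 1, x0]
                         + fx * fy * image[y0 + 1, x0 + 1])
    return out.astype(image.dtype)

img = np.arange(64, dtype=np.float32).reshape(8, 8)
shifted = subpixel_shift(img, 30.0, 10.0)   # shift by 30 % of a pixel in X, 10 % in Y
```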
in one embodiment: after the step of recording the facial features of the person, the method further comprises the following steps:
scanning the facial features of the recorded characters to acquire subtle features in the facial features of the recorded characters, wherein the subtle features are unique features of the face of each character;
and correspondingly marking each character image data of the conference according to the unique characteristics of the face of each character, wherein the marking is a method for identifying the double verification identity of each character in the conference.
In a specific embodiment: the system scans the recorded facial features of the persons and captures the subtle features that are unique to each person's face; the unique facial features of each person are then marked in the conference system, and this marking can serve as a verification means for distinguishing the secondary persons from the important persons in the conference;
the captured subtle facial features are specifically the face data that differ from person to person, such as acne marks, scars and birthmarks on the face.
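A minimal sketch of using such unique facial marks as a second identity check is shown below; the record layout and the mark names are assumptions introduced purely for illustration.

```python
def double_verify(person_id: str, observed_marks: set, registered_marks: dict) -> bool:
    """Second verification step: the identity passes only if every subtle mark registered
    for that person is also present in the newly scanned face."""
    expected = registered_marks.get(person_id, set())
    return bool(expected) and expected.issubset(observed_marks)

registered = {
    "chairman": {"scar_left_cheek"},
    "secretary": {"birthmark_forehead"},
}
print(double_verify("chairman", {"scar_left_cheek", "acne_mark"}, registered))  # True
print(double_verify("chairman", {"birthmark_forehead"}, registered))            # False
```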
In one embodiment: before the step of performing the iterative algorithm on the image data of the conference people, the method further comprises the following steps of:
acquiring the image data of the conference people, correcting and estimating according to the image data of the conference people, and analyzing to obtain parameter values of the image data of the conference people;
and acquiring face data of the conference persons, correcting and estimating according to the face data of the conference persons, and decomposing to obtain parameter values of the face data of the conference persons.
In a specific embodiment: the conference person image data and the conference person face data are acquired and correction estimation is performed on them; the corrected threshold range is taken as the correction estimation value, and the optimal parameter value within that threshold range is taken as the parameter value of the conference person image data and the conference person face data;
correction estimation specifically means taking a threshold range over the data that the conference person image data and the conference person face data have in common, for example: if a unique feature appears simultaneously in the pixel data of a board chairman's conference person image data and in his conference person face data, a threshold range can be selected over the conference person image data and the conference person face data; the value within that range that is closest to both the image data threshold and the face data threshold is taken as the optimal value, and this optimal value is used as the parameter value of the conference person image data and the conference person face data.
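Only as a sketch of this correction-estimation idea (a threshold range spanned by the two measurements, with the value closest to both taken as the common parameter value); representing the shared feature as a single number per data source is an assumption of this example.

```python
def correction_estimate(image_value: float, face_value: float) -> float:
    """Treat the interval between the image-data measurement and the face-data measurement
    as the threshold range, and return the value in it closest to both ends (the midpoint)."""
    low, high = sorted((image_value, face_value))
    return (low + high) / 2.0

# a feature measured as 0.82 in the person image data and 0.78 in the person face data
param = correction_estimate(0.82, 0.78)   # -> 0.80, used as the common parameter value
```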
Referring to fig. 3, which illustrates the iterative algorithm of the conference system in an embodiment of the present invention.
In one embodiment: the step of performing the iterative algorithm on the image data of the conference persons includes:
carrying out data processing on the parameter values of the image data of the conference figures and the parameter values of the face data of the conference figures to obtain processed parameter values of the data of the conference figures; wherein the data processing comprises: data partitioning and data cleaning;
extracting the processed data characteristics of the conference people according to the processed data parameter values of the conference people; wherein the characteristic data comprises: changes in human expression;
and matching the processed conference figure parameter values with the processed conference figure data to perform an iterative algorithm, and calculating the number of times of changing the character expression, the total character expression and the time interval of changing the character expression by the iterative algorithm.
In a specific embodiment: the process of applying the iterative algorithm to the conference persons specifically comprises the following steps: performing data processing on the conference person image data and the facial features of the persons, the data processing comprising data division and data cleaning; then extracting the data features of the conference persons, and calculating, through the iterative algorithm, the number of person expression changes, the person expression total and the time interval between expression changes;
data division specifically concerns the image data that the conference system holds for secondary persons appearing near important persons during the conference, for example: a board chairman has a secretary standing beside him during the conference; the secretary, being a secondary person, has his or her data divided off, so the image data of the chairman is retained and the image data of the secretary is partitioned away;
data cleaning specifically means screening and checking the historical data in the conference system and cleaning out text and data that no longer match the current situation, for example: if the appearance of an important person such as a board chairman recorded by the existing system has changed and no longer matches the current situation, the appearance data recorded by the existing system is cleaned out;
extracting the data features of the conference persons means capturing and acquiring, in real time during the conference, the facial expression changes of the conference persons;
the process of calculating the number of expression changes, the expression total and the time interval between expression changes through the iterative algorithm is specifically to match the person expression data already stored in the conference system with the person expression change data newly captured by the conference system, and to accumulate cell values in the system through iterative calculation and circular references; for example: suppose the system has three cells, X, Y and Z, for recording the course of a person's expression changes, where data is recorded in X, all data recorded in X is accumulated in Y, and the number of times X is recorded is accumulated in Z;
assuming that the number of iterative computations is 1, the following formula is entered in the cell of Y:
Y = Y + X
that is, the value of X is accumulated onto the value of Y;
the cell entry formula at Z is as follows:
Z = IF(AND(X = 0, Z = 0), 0, IF(X = 0, Z, Z + 1))
meaning that if X and Z are both 0, the formula returns 0; if X is 0 but Z is greater than 0, the formula returns the value of Z; if neither condition is satisfied, 1 is added to the value of Z;
when 10 is recorded in the cell of X, the formula in Y returns 10 and the formula in Z returns 1, representing the first accumulation, with a total of 10;
when 40 is then recorded in the cell of X, the formula in Y returns 50 and the formula in Z returns 2, representing the second accumulation, with a total of 50;
here X is the number of person expression changes recorded in the conference system, Y is the person expression total obtained in the conference system, and Z is the time interval between an expression change and its real-time capture by the conference system.
In one embodiment: the step of matching the image data of the conference person in accordance with the image data form of the conference person includes:
after the image data form records the image data of the conference characters and the face data of the conference characters, scanning the data of the conference characters and comparing the data with the image data form;
and if the image data or the face data of the conference figure which is not matched with the image data form is scanned, carrying out secondary image data search on the unmatched conference figure, and re-acquiring the image data and the face data of the conference figure which are matched with the image data form.
In a specific embodiment: after the conference system finishes recording the conference person image data and the conference person face data, each piece of conference person data is scanned individually and compared with the data recorded in the image data form; if the scan finds data that does not match the conference person data, a secondary image data search is performed on that conference person, and conference person image data and conference person face data matching the image data form are re-acquired;
the process of performing a secondary image data search on an unmatched conference person specifically means that the system re-acquires that person's image data and face data and matches them against the image data form again.
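A minimal sketch of this re-matching and secondary-search loop follows; the form structure and the re-acquisition callback are assumptions introduced for illustration only.

```python
def rematch_against_form(form: dict, scanned: dict, reacquire) -> dict:
    """Compare each scanned conference-person entry with the image data form; for any
    mismatch, perform a secondary image data search (re-acquire) and keep the new data."""
    processed = {}
    for person_id, expected in form.items():
        data = scanned.get(person_id)
        if data != expected:
            data = reacquire(person_id)   # secondary image data search for this person
        processed[person_id] = data
    return processed

form = {"chairman": "face-hash-a", "secretary": "face-hash-b"}
scanned = {"chairman": "face-hash-a", "secretary": "stale-hash"}
result = rematch_against_form(form, scanned, reacquire=lambda pid: form[pid])
# result now matches the form: the secretary's mismatched entry was re-acquired
```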
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. An image data processing video conferencing system, comprising:
the system comprises an acquisition module, configured to acquire conference person image data comprising secondary-person image data and important-person image data, the conference person image data comprising face data, and to receive and record the facial features of the conference secondary persons and the facial features of the conference important persons, wherein the facial features of the conference secondary persons and the conference important persons comprise features of the five sense organs;
and a matching module, configured to apply an iterative algorithm to the image data of the conference secondary persons and the conference important persons according to the facial features of the conference secondary persons and the facial features of the conference important persons, to obtain reconstructed image data of the conference secondary persons and the conference important persons; and to list the secondary-person image data and the important-person image data, respectively, in image data forms corresponding to the facial features of the conference secondary persons and the conference important persons, and to re-match the conference person image data according to the image data forms of the conference secondary persons and the conference important persons to obtain processed image data.
2. The image data processing video conferencing system of claim 1, wherein the acquisition module further comprises:
an obtaining subunit configured to obtain conference person image data including secondary person image data and important person image data, the person image data including face data;
the receiving and recording subunit is used for receiving and recording the facial features of the secondary conference characters and the facial features of the important conference characters; wherein the facial features of the conference secondary characters and the facial features of the conference important characters comprise five-sense-organ features.
3. The image data processing video conferencing system of claim 1, wherein the matching module further comprises:
a reconstruction subunit, configured to apply an iterative algorithm to the image data of the conference secondary persons and the conference important persons according to the facial features of the conference secondary persons and the facial features of the conference important persons, to obtain reconstructed image data of the conference secondary persons and the conference important persons;
and a processing subunit, configured to list the secondary-person image data and the important-person image data, respectively, in image data forms corresponding to the facial features of the conference secondary persons and the conference important persons, and to re-match the conference person image data according to the image data forms of the conference secondary persons and the conference important persons to obtain processed image data.
4. An image data processing video conference method, wherein the method is executed by the image data processing video conference system according to any one of claims 1 to 3, the method comprising:
acquiring conference figure image data comprising secondary figure image data and important figure image data, wherein the figure data comprises face data;
receiving the face features of the secondary characters of the conference and the face features of the important characters of the conference; wherein the facial features of the conference secondary characters and the conference important characters comprise facial features of five sense organs;
performing an iterative algorithm on the image data of the secondary conference characters and the important conference characters according to the facial features of the secondary conference characters and the facial features of the important conference characters to obtain the image data of the secondary conference characters and the important conference characters which are completely reformed;
and respectively listing the image data of the secondary person and the image data of the important conference person in image data forms corresponding to the facial features of the secondary person and the important conference person, and re-matching the image data of the conference person according to the image data forms of the secondary person and the important conference person to obtain processed image data.
5. The image data processing video conference method according to claim 4, wherein said step of obtaining image data of a conference person further comprises:
identifying all the persons participating in the conference according to a preset conference figure form;
acquiring the information of the persons participating in the conference, wherein the information of the persons participating in the conference comprises clothes of the persons, body shapes of the persons and faces of the persons;
and matching and classifying the conference people according to the conference people information, wherein the conference people are conference secondary people and conference important people.
6. The image data processing video conferencing method of claim 4, wherein the step of recording the facial features of the persons comprises:
performing pixel decomposition on the conference figure image data according to the acquired conference figure image data to obtain image data with optimal pixel mixing but not decomposed;
repeatedly adjusting sub-pixels in all pixels of the image according to the percentage value in the image data of which the optimal pixels are mixed but not decomposed until all pixels in the image are processed;
and after the processing is finished, obtaining a sub-pixel level image with extremely high resolution, and recording the facial features of the people according to the sub-pixel level image.
7. The image data processing video conferencing method of claim 5, wherein after the step of receiving the facial features of the persons, the method further comprises:
scanning the facial features of the recorded characters to acquire subtle features in the facial features of the recorded characters, wherein the subtle features are unique features of the face of each character;
and correspondingly marking each character image data of the conference according to the unique characteristics of the face of each character, wherein the marking is a method for identifying the double verification identity of each character in the conference.
8. The image data processing video conferencing method of claim 4, wherein the step of performing an iterative algorithm on the image data of the conference secondary person and the conference important person further comprises:
acquiring the image data of the conference people, correcting and estimating according to the image data of the conference people, and analyzing to obtain parameter values of the image data of the conference people;
and acquiring face data of the conference persons, correcting and estimating according to the face data of the conference persons, and decomposing to obtain parameter values of the face data of the conference persons.
9. The image data processing video conferencing method of claim 8, wherein the step of performing an iterative algorithm on the image data of the secondary conference character and the important conference character further comprises:
carrying out data processing on the parameter values of the image data of the conference figures and the parameter values of the face data of the conference figures to obtain processed parameter values of the data of the conference figures; wherein the data processing comprises: data partitioning and data cleaning;
extracting the processed data characteristics of the conference people according to the processed data parameter values of the conference people; wherein the characteristic data comprises: changes in human expression;
and matching the processed conference figure parameter values with the processed conference figure data to perform an iterative algorithm, and calculating the number of times of changing the character expression, the total character expression and the time interval of changing the character expression by the iterative algorithm.
10. The image data processing video conferencing method of claim 4, wherein the step of matching the image data of the conference person according to the image data form of the conference person comprises:
after the image data form records the image data of the conference characters and the face data of the conference characters, scanning the data and the characteristics of the conference characters, and comparing the data and the characteristics with the image data form;
and if the image data or the face data of the conference figure which is not matched with the image data form is scanned, carrying out secondary image data search on the unmatched conference figure, and re-acquiring the image data and the face data of the conference figure which are matched with the image data form.
CN202111280935.6A 2021-11-01 2021-11-01 Video conference system and method for processing image data Pending CN113727062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111280935.6A CN113727062A (en) 2021-11-01 2021-11-01 Video conference system and method for processing image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111280935.6A CN113727062A (en) 2021-11-01 2021-11-01 Video conference system and method for processing image data

Publications (1)

Publication Number Publication Date
CN113727062A (en) 2021-11-30

Family

ID=78686314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111280935.6A Pending CN113727062A (en) 2021-11-01 2021-11-01 Video conference system and method for processing image data

Country Status (1)

Country Link
CN (1) CN113727062A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105376517A (en) * 2015-12-01 2016-03-02 兴天通讯技术有限公司 Smart television for video conferences
CN108174141A (en) * 2017-11-30 2018-06-15 维沃移动通信有限公司 A kind of method of video communication and a kind of mobile device
CN110288306A (en) * 2019-05-09 2019-09-27 广东博智林机器人有限公司 Conferencing information acquisition methods, device, computer equipment and storage medium
US20210195142A1 (en) * 2019-05-09 2021-06-24 Present Communications, Inc. Video conferencing method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211130