CN114025217A - Image display method, equipment and storage medium - Google Patents

Image display method, equipment and storage medium

Info

Publication number
CN114025217A
CN114025217A
Authority
CN
China
Prior art keywords
image
electronic whiteboard
shared
terminal device
shooting object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010680176.1A
Other languages
Chinese (zh)
Inventor
Zhang Kai (张凯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010680176.1A
Publication of CN114025217A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of this application disclose an image display method, device, and storage medium, with which a user no longer needs to look back and forth between two display screens to follow the video image and the shared content: the shared content on an electronic whiteboard can be viewed clearly, and the real image of the conference speaker in a shared conference can be viewed on the same display screen. The method includes the following steps: at least one first terminal device receives a video media stream and an electronic whiteboard shared stream sent by a server, wherein the video media stream contains a first shooting object and a second shooting object, the electronic whiteboard shared stream contains the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on an electronic whiteboard; the first terminal device generates a first image based on the video media stream and the electronic whiteboard shared stream, wherein the first image reflects the first shooting object and the second shooting object on a first display screen; and the first terminal device displays the first image on the first display screen.

Description

Image display method, equipment and storage medium
Technical Field
The embodiments of this application relate to the field of computer technology, and in particular to an image display method, device, and storage medium.
Background
At present, communication between conference terminals is mainly video and audio communication. With the development of communication technology, however, people's requirements for conference communication are no longer limited to video and audio; communication and interaction around text information are becoming more and more important.
Since the introduction of the electronic whiteboard, people have been able to use it to hold sharing conferences conveniently. For example, please refer to fig. 1, a schematic diagram of a conference held with an electronic whiteboard in the prior art. As shown in fig. 1, conference terminal 1, conference terminal 2, and conference terminal 3 join the same video conference; a speaker gives a presentation in front of the electronic whiteboard, writing on it or projecting lecture slides onto it. The shared content on the electronic whiteboard and the speaker can then be captured by an external camera; conference terminal 1 can send the shared content on the electronic whiteboard and the video images of the speaker to the conference cloud server, and the conference cloud server forwards the shared content and the video images to conference terminal 2 and conference terminal 3. Conference terminal 2 displays the shared content and the video image on two display screens, for example: the video image is displayed on display screen 1, and the shared content is displayed on display screen 2. Conference terminal 3 has only one display screen, display screen 3, so the shared content is usually displayed on display screen 3 while the speaker's video is shown in a small window nested in the upper right corner of display screen 3.
However, when two display screens are used to display the shared content and the video image, the user at conference terminal 2 has to keep shifting their line of sight back and forth between the two screens, which greatly harms the viewing experience. With the single-screen approach, in which the shared content is displayed and the speaker's video is nested on top of it, a user who chooses to watch the shared content on the electronic whiteboard can see the speaker's video only in a small window that easily blocks part of the writing or the projected content, harming the user experience; a user who chooses to watch the video in the small window instead sees the shared content played back inside the video stream, where a high-definition display effect cannot be guaranteed, again harming the user experience.
Disclosure of Invention
Embodiments of this application provide an image display method, device, and storage medium, with which a user can watch a first image on a single first display screen without having to look back and forth between two display screens for the video image and the shared content; the shared content is not easily blocked, the shared content on the electronic whiteboard can be viewed more clearly, and the real image of the conference speaker in the shared conference can be watched on the same display screen.
In a first aspect, an embodiment of the present application provides a method for displaying an image, where the method may include:
at least one first terminal device receives a video media stream and an electronic whiteboard shared stream sent by a server, wherein the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, the second shooting object reflects shared content on an electronic whiteboard, and each first terminal device in the at least one first terminal device is connected to the shared conference;
each first terminal device generates a first image based on the video media stream and the electronic whiteboard shared stream, wherein the first image reflects the first shooting object and the second shooting object on a first display screen;
each first terminal device displays the first image on the first display screen.
Optionally, with reference to the first aspect, in a first possible implementation manner, each of the first terminal devices generates a first image based on the video media stream and the electronic whiteboard shared stream, where the generating includes:
the first terminal device obtains a video image based on the video media stream, and obtains an electronic whiteboard shared image based on the electronic whiteboard shared stream;
the first terminal device obtains a first captured image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image;
the first terminal device superimposes the first captured image on the electronic whiteboard shared image to generate the first image.
Optionally, with reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, the obtaining, by the first terminal device, of the first captured image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image includes:
the first terminal device performs grayscale conversion on the video image to obtain a first grayscale image, and performs grayscale conversion on the electronic whiteboard shared image to obtain a second grayscale image;
the first terminal device processes the first grayscale image and the second grayscale image based on a preset background subtraction algorithm to obtain a third grayscale image, wherein the third grayscale image does not contain the second shooting object;
the first terminal device determines shooting position information of the first shooting object based on the third grayscale image;
the first terminal device extracts the first captured image corresponding to the first shooting object from the video image based on the shooting position information.
Optionally, with reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, the determining, by the first terminal device, of the shooting position information of the first shooting object in the video image based on the third grayscale image includes:
the first terminal device performs binarization processing on the third grayscale image to obtain a second image;
the first terminal device performs noise reduction processing on the second image to obtain a third image;
the first terminal device processes the third image based on a preset boundary-finding algorithm to obtain the shooting position information of the first shooting object in the video image;
correspondingly, the extracting, by the first terminal device, of the first captured image corresponding to the first shooting object from the video image based on the shooting position information includes:
the first terminal device determines an image area of the first shooting object in the video image based on the shooting position information;
the first terminal device crops the image area from the video image to obtain the first captured image corresponding to the first shooting object.
Optionally, with reference to the first aspect to the third possible implementation manner of the first aspect, in a fourth possible implementation manner, the method further includes:
the first terminal device displays the electronic whiteboard shared image on a second display screen, wherein the second display screen and the first display screen are on the same first terminal device.
Optionally, with reference to the first aspect to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, before the first terminal device obtains the first captured image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image, the method further includes:
the first terminal device cuts out a first area from the video image, wherein the first area contains the first shooting object and the second shooting object, and the size of the first area is equal to the size of the electronic whiteboard shared image.
In a second aspect, an embodiment of the present application provides a method for displaying an image, where the method may include:
a server receives a video media stream and an electronic whiteboard shared stream sent by a second terminal device, wherein the video media stream contains a first shooting object and a second shooting object, the electronic whiteboard shared stream contains the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on the electronic whiteboard;
the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, so that each first terminal device of the at least one first terminal device generates a first image and displays the first image on a first display screen, wherein the first image reflects the first shooting object and the second shooting object on the first display screen, and the second terminal device and each first terminal device of the at least one first terminal device are connected to the shared conference.
In a third aspect, an embodiment of the present application provides a method for displaying an image, where the method may include:
the method comprises the steps that a second terminal device obtains a video image and an electronic whiteboard shared image, wherein the video image comprises a first shooting object and a second shooting object, the electronic whiteboard shared image comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared contents on the electronic whiteboard; the second terminal equipment generates a video media stream corresponding to the video image and an electronic whiteboard shared stream corresponding to the electronic whiteboard shared image;
the second terminal device sends the video media stream and the electronic whiteboard shared stream to a server, so that the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, wherein the second terminal device and each first terminal device in the at least one terminal device are connected to the sharing conference.
Optionally, with reference to the third aspect, in a first possible implementation manner, the acquiring, by the second terminal device, a video image and an electronic whiteboard shared image includes:
the second terminal device receives the video image sent by the camera device, and receives the electronic whiteboard shared image sent by the electronic whiteboard.
In a fourth aspect, an embodiment of the present application provides a first terminal device, where the first terminal device may include:
the system comprises a first receiving unit, a second receiving unit and a third receiving unit, wherein the first receiving unit is used for receiving a video media stream and an electronic whiteboard shared stream which are sent by a server, the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, the second shooting object reflects shared content on an electronic whiteboard, and each first terminal device in at least one first terminal device is connected to the shared conference;
the processing unit is used for generating a first image according to the video media stream and the electronic whiteboard shared stream, wherein the first image reflects the first shooting object and the second shooting object on a first display screen;
a display unit for displaying the first image on the first display screen.
Optionally, with reference to the fourth aspect, in a first possible implementation manner, the processing unit includes:
an obtaining module, configured to obtain a video image based on the video media stream and obtain an electronic whiteboard shared image based on the electronic whiteboard shared stream;
the obtaining module is further configured to obtain a first captured image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image;
a generating module, configured to superimpose the first captured image on the electronic whiteboard shared image to generate the first image.
Optionally, with reference to the first possible implementation manner of the fourth aspect, in a second possible implementation manner, the obtaining module includes:
an obtaining submodule, configured to perform grayscale conversion on the video image to obtain a first grayscale image, and to perform grayscale conversion on the electronic whiteboard shared image to obtain a second grayscale image;
the obtaining submodule is further configured to process the first grayscale image and the second grayscale image based on a preset background subtraction algorithm to obtain a third grayscale image, wherein the third grayscale image does not contain the second shooting object;
a determining submodule, configured to determine shooting position information of the first shooting object based on the third grayscale image;
an extraction submodule, configured to extract the first captured image corresponding to the first shooting object from the video image based on the shooting position information.
Optionally, with reference to the second possible implementation manner of the fourth aspect, in a third possible implementation manner, the determining submodule is configured to:
perform binarization processing on the third grayscale image to obtain a second image;
perform noise reduction processing on the second image to obtain a third image;
process the third image based on a preset boundary-finding algorithm to obtain shooting position information of the first shooting object in the video image;
correspondingly, the extraction submodule is configured to:
determine an image area of the first shooting object in the video image based on the shooting position information;
crop the image area from the video image to obtain the first captured image corresponding to the first shooting object.
Optionally, with reference to the fourth aspect to the third possible implementation manner of the fourth aspect, in a fourth possible implementation manner, the display unit is further configured to display the electronic whiteboard shared image on a second display screen, wherein the second display screen and the first display screen are on the same first terminal device.
Optionally, with reference to the fourth aspect to the fourth possible implementation manner of the fourth aspect, in a fifth possible implementation manner, the processing unit is further configured to, before obtaining the first captured image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image, cut out a first area from the video image, wherein the first area contains the first shooting object and the second shooting object, and the size of the first area is equal to the size of the electronic whiteboard shared image.
In a fifth aspect, an embodiment of the present application provides a server, where the server may include:
the second receiving unit is used for receiving a video media stream and an electronic whiteboard shared stream sent by second terminal equipment, wherein the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on the electronic whiteboard;
a first sending unit, configured to send the video media stream and the electronic whiteboard shared stream to at least one first terminal device, so that each of the at least one first terminal device generates a first image, and displays the first image on a first display screen, where the first image reflects that the first photographic object and the second photographic object are on the first display screen, and the second terminal device and each of the at least one terminal device are connected to the shared conference.
In a sixth aspect, an embodiment of the present application provides a second terminal device, where the second terminal device may include: an obtaining unit, configured to obtain a video image and an electronic whiteboard shared image, wherein the video image contains a first shooting object and a second shooting object, the electronic whiteboard shared image contains the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on the electronic whiteboard;
a generating unit, configured to generate a video media stream corresponding to the video image and an electronic whiteboard shared stream corresponding to the electronic whiteboard shared image;
a second sending unit, configured to send the video media stream and the electronic whiteboard shared stream to a server, so that the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, wherein the second terminal device and each first terminal device of the at least one first terminal device are connected to the shared conference.
Optionally, with reference to the sixth aspect, in a first possible implementation manner, the obtaining unit includes:
an acquisition module, configured to receive the video image sent by the camera device and to receive the electronic whiteboard shared image sent by the electronic whiteboard.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method according to the first aspect or any one of the possible implementation manners of the first aspect.
In an eighth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method according to the second aspect.
In a ninth aspect, embodiments of the present application provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform a method according to any one of the possible implementation manners of the third aspect or the third aspect.
In a tenth aspect, embodiments of the present application provide a computer program product containing instructions that, when executed on a computer, cause the computer to perform a method according to the first aspect or any one of the possible implementations of the first aspect.
In an eleventh aspect, an embodiment of the present application provides a computer program product containing instructions that, when executed on a computer, cause the computer to perform the method according to the second aspect.
In a twelfth aspect, embodiments of the present application provide a computer program product containing instructions that, when executed on a computer, cause the computer to perform the method according to any one of the possible implementation manners of the third aspect or the third aspect.
In a thirteenth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor configured to support a first terminal device in implementing the functions in the first aspect or any one of the possible implementation manners of the first aspect. In one possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the first terminal device. The chip system may consist of a chip, or may include a chip and other discrete devices.
In a fourteenth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor configured to support a server in implementing the functions in the second aspect. In one possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the server. The chip system may consist of a chip, or may include a chip and other discrete devices.
In a fifteenth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor configured to support a second terminal device in implementing the functions in the third aspect or any one of the possible implementation manners of the third aspect. In one possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the second terminal device. The chip system may consist of a chip, or may include a chip and other discrete devices.
The above technical solutions show that the embodiments of this application have the following advantages:
In the embodiments of this application, the video media stream contains the first shooting object and the second shooting object, and the electronic whiteboard shared stream contains the second shooting object. Because the first image is generated from the video media stream and the electronic whiteboard shared stream, the first image can reflect the first shooting object and the second shooting object on the first display screen, and the first terminal device can display the first image on a single first display screen. The user therefore does not need to look back and forth between two display screens for the video image and the shared content, and the shared content is not easily blocked, so the user can view the shared content on the electronic whiteboard more clearly and can watch the real image of the conference speaker in the shared conference on the same display screen.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description are only some embodiments of the present application.
Fig. 1 is a schematic diagram of an application scenario provided in the prior art;
Fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 3 is a system architecture diagram provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of an embodiment of an image display method provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of another embodiment of an image display method provided in an embodiment of the present application;
Fig. 6a is a schematic diagram of identifying an electronic whiteboard provided in an embodiment of the present application;
Fig. 6b is a schematic diagram of cutting a first region out of a video image provided in an embodiment of the present application;
Fig. 6c is a schematic diagram of grayscale conversion of a video image provided in an embodiment of the present application;
Fig. 6d is a schematic diagram of grayscale conversion of the electronic whiteboard shared image provided in an embodiment of the present application;
Fig. 6e is a schematic diagram of background subtraction provided in an embodiment of the present application;
Fig. 6f is a schematic diagram of noise reduction processing performed on the third grayscale image provided in an embodiment of the present application;
Fig. 6g is a schematic diagram of locating an image area of a first shooting object provided in an embodiment of the present application;
Fig. 6h is a schematic diagram of an image overlay provided in an embodiment of the present application;
Fig. 7 is a schematic hardware structure diagram of a communication device provided in an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a first terminal device provided in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a second terminal device provided in an embodiment of the present application.
Detailed Description
Embodiments of this application provide an image display method, device, and storage medium, with which a user can watch a first image on a single first display screen without having to look back and forth between two display screens for the video image and the shared content; the shared content is not easily blocked, the shared content on the electronic whiteboard can be viewed more clearly, and the real image of the conference speaker in the shared conference can be watched on the same display screen.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
At present, people can conveniently hold a sharing conference using an electronic whiteboard. However, two display screens are usually used to display the video image and the shared content on the electronic whiteboard, so a user at a participating conference terminal has to keep shifting their line of sight back and forth between the two screens, which greatly harms the viewing experience. Alternatively, a single display screen displays the shared content with the speaker's video nested on top of it: if the user chooses to watch the shared content on the electronic whiteboard, the speaker's video can only be seen in a small window that easily blocks the writing or the projected content, harming the user experience; if the user chooses to watch the video in the small window, the shared content on the electronic whiteboard is played back inside the video stream, where a high-definition display effect cannot be guaranteed, again harming the user experience.
In order to solve the above problems, an embodiment of the present application provides an image display method. The method is mainly applied to sharing scenarios, such as video conferences or live broadcasts, that use an electronic whiteboard. Please refer to fig. 2, a schematic diagram of an application scenario provided in an embodiment of the present application.
As shown in fig. 2, conference terminal 1, conference terminal 2, and conference terminal 3 join the same video conference. The speaker gives a presentation in front of the electronic whiteboard, writing on it or projecting lecture slides onto it, so the shared content on the electronic whiteboard and the speaker can be captured by an external camera. Conference terminal 1 can send the shared content on the electronic whiteboard and the video images of the speaker to the conference cloud server, and the conference cloud server forwards them to conference terminal 2 and conference terminal 3.
Since conference terminal 2 is provided with two display screens, it can display the shared content and the video image on both, for example: the video image and the shared content are displayed on display screen 1, while only the shared content is displayed on display screen 2. As for conference terminal 3, since it has only one display screen, display screen 3, the shared content and the video image can be displayed together on display screen 3. In other words, in the embodiments of this application, the portrait image of the speaker in the video image is superimposed on the electronic whiteboard content, so that the video image and the electronic whiteboard can be displayed together on the same display screen. This optimizes the display effect of the electronic whiteboard, keeps the shared content on the electronic whiteboard from being easily blocked, and greatly improves the user's viewing experience.
It should be noted that the foregoing description of conference terminals 1, 2, and 3 joining the same video conference is only illustrative; in practical applications, a conference terminal 4, a conference terminal 5, and other conference terminals may also be included. The number of conference terminals joining the conference is not limited in the embodiments of this application.
On the basis of fig. 2, please refer to fig. 3, a system architecture diagram according to an embodiment of the present application. As shown in fig. 3, the system includes at least one first terminal device, a second terminal device, and a server.
The at least one first terminal device may include first terminal device 1, first terminal device 2, ..., and first terminal device N (N ≥ 1), which will not be enumerated one by one in this embodiment. The second terminal device receives the first shooting object and the second shooting object sent by the camera, and receives the second shooting object sent by the electronic whiteboard, so the second terminal device can send the first shooting object and the second shooting object to the server in the form of a video media stream, and send the second shooting object to the server in the form of an electronic whiteboard shared stream. The server can then forward the video media stream and the electronic whiteboard shared stream to the at least one first terminal device joined to the shared conference. At this point, each first terminal device of the at least one first terminal device can generate a first image based on the video media stream and the electronic whiteboard shared stream, and the first image can reflect the first shooting object and the second shooting object on the same first display screen, so each first terminal device can display the first image on its first display screen. This optimizes the display effect of the electronic whiteboard, keeps the shared content on the electronic whiteboard from being easily blocked, and greatly improves the user's viewing experience.
It should be noted that the at least one first terminal device in fig. 3 may be conference terminal 2, conference terminal 3, and the like in fig. 2, or may be a live-streaming terminal in a live-broadcast scenario, and the second terminal device may be conference terminal 1 in fig. 2, and the like, which is not specifically limited in the embodiments of this application. In addition, the server mentioned in fig. 3 may be a conference cloud server deployed on a public cloud that can provide video conferencing or live broadcasting; it may also be a conference server deployed in an enterprise network that provides video conferencing or live-broadcast capability; the type of the server is not limited in the embodiments of this application. Each terminal device can communicate with the server to join the same shared conference.
In addition, the at least one first terminal device described above may be a fixed terminal placed in a conference room (e.g., first terminal device 1 in fig. 3) or a mobile terminal (e.g., first terminal device N in fig. 3), where the mobile terminal may be a device such as a mobile phone or a tablet computer. Similarly, the second terminal device may also be a fixed terminal placed in a conference room, a mobile terminal, or another terminal device, which is not limited in this embodiment. Optionally, some or all of the terminal devices in the shared conference have corresponding image capturing devices; for example, the second terminal device and first terminal device 1 have corresponding image capturing devices, while first terminal device N does not. Each terminal device may be independent of, or integrated with, its image capturing device, which is not specifically limited in the embodiments of this application.
The method for displaying images in this embodiment may be applied to the system architecture shown in fig. 3, and may also be applied to other system architectures, which are not limited herein.
To facilitate a better understanding of the solution proposed in the embodiments of this application, the following describes a specific flow of an embodiment. Please refer to fig. 4, a schematic diagram of an embodiment of an image display method provided in an embodiment of the present application. The method may include:
401. The second terminal device acquires a video image and an electronic whiteboard shared image, wherein the video image contains a first shooting object and a second shooting object, and the electronic whiteboard shared image contains the second shooting object.
In this embodiment, the image capturing device may capture the conference speaker and the shared content on the electronic whiteboard to generate a video image, and transmit the video image to the second terminal device. In addition, since the electronic whiteboard is a capacitive or infrared screen, it can be connected to a terminal device, for example a conference terminal in a video conference; it can support the conference speaker writing on the screen or projecting onto it, can also display conference pictures and the like, usually has a built-in camera, and can integrate the capability of a terminal device into its own hardware. The electronic whiteboard can therefore generate an electronic whiteboard shared image from the shared content that the conference speaker writes or projects on the screen, and then send the electronic whiteboard shared image to the second terminal device.
In this way, when the second terminal device receives the video image sent by the camera device and the electronic whiteboard shared image sent by the electronic whiteboard, the second terminal device has acquired the video image and the electronic whiteboard shared image.
It should be noted that the video image contains the first shooting object and the second shooting object, and the electronic whiteboard shared image contains the second shooting object, wherein the first shooting object reflects the conference speaker in the shared conference, and the second shooting object reflects the shared content that the conference speaker writes or projects on the electronic whiteboard.
402. The second terminal device generates a video media stream corresponding to the video image and an electronic whiteboard shared stream corresponding to the electronic whiteboard shared image.
In this embodiment, after obtaining the video image, the second terminal device may generate the corresponding video media stream, and after obtaining the electronic whiteboard shared image, generate the corresponding electronic whiteboard shared stream, so that the video media stream also contains the first shooting object and the second shooting object, and the electronic whiteboard shared stream also contains the second shooting object. In this way, the second terminal device can send the video media stream and the electronic whiteboard shared stream to the server of the shared conference.
403. The second terminal device sends the video media stream and the electronic whiteboard shared stream to the server.
In this embodiment, the server mainly plays a relay role. That is, after receiving the video media stream and the electronic whiteboard shared stream sent by the second terminal device, the server forwards them to the at least one first terminal device in the shared conference, so that each first terminal device can process the video media stream and the electronic whiteboard shared stream and display the first shooting object in the video image and the second shooting object on the electronic whiteboard together on the same display screen.
It should be noted that the second terminal device and each of the at least one first terminal device must join the same shared conference, so that the server can obtain the video media stream and the electronic whiteboard shared stream from the second terminal device and forward them to each first terminal device of the at least one first terminal device.
404. The server sends the video media stream and the electronic whiteboard shared stream to the at least one first terminal device.
405. The first terminal device generates a first image based on the video media stream and the electronic whiteboard shared stream, wherein the first image reflects the first shooting object and the second shooting object on the first display screen.
In this embodiment, after receiving the video media stream and the electronic whiteboard shared stream sent by the server, the first terminal device can generate a first image based on them, wherein the first image reflects the first shooting object and the second shooting object on the first display screen of that first terminal device. The user does not need to look back and forth between two display screens for the video image and the shared content, the shared content is not easily blocked, and the display effect of the electronic whiteboard is optimized, improving the user experience.
406. The first terminal device displays the first image on the first display screen.
In this embodiment, this can be understood with reference to the display manner of conference terminal 3 in fig. 2, and details are not repeated here.
Optionally, in other embodiments, the method further includes: the first terminal device displays the electronic whiteboard shared image on a second display screen, wherein the second display screen and the first display screen are on the same first terminal device.
In this embodiment, if the first terminal device has two different display screens, namely a first display screen and a second display screen, the first terminal device can display the first image on the first display screen and display only the electronic whiteboard shared image on the other, second display screen. In this way, the user can view both the first shooting object and the second shooting object on the first display screen, so the user does not need to look back and forth between two display screens for the video image and the shared content; moreover, the user can view the second shooting object on the second display screen, where the shared content is not blocked by the first shooting object, improving the display effect. For details, refer to the display manner of conference terminal 2 in fig. 2, which is not repeated here.
In some embodiments, to facilitate a better understanding of the solution proposed in the embodiments of this application, the solution described in step 405 above can be understood with reference to fig. 5, a schematic diagram of another embodiment of an image display method provided in an embodiment of the present application. The method may include:
S501. The first terminal device obtains a video image based on the video media stream, and obtains an electronic whiteboard shared image based on the electronic whiteboard shared stream.
S502. The first terminal device performs grayscale conversion on the video image to obtain a first grayscale image, and performs grayscale conversion on the electronic whiteboard shared image to obtain a second grayscale image.
In this embodiment, since the first shooting object and the second shooting object captured by the image capturing device form a color image, the grayscale conversion is mainly used to turn the color video image into a gray image, yielding the first grayscale image. Compared with the color video image, using the first grayscale image speeds up subsequent image processing. Similarly, since the shared content on the electronic whiteboard is also a color image, the first terminal device also performs grayscale conversion on the electronic whiteboard shared image to obtain the second grayscale image.
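The following sketch is illustrative only: the embodiments do not prescribe a particular library, so the use of Python, OpenCV (cv2), and the frame variables below are assumptions. Under those assumptions, the grayscale conversion of step S502 could look like this:

    import cv2

    # Frames assumed to have been decoded from the two streams in step S501.
    video_frame = cv2.imread("video_frame.png")             # contains the speaker and the whiteboard
    whiteboard_image = cv2.imread("whiteboard_share.png")   # the electronic whiteboard shared image

    # Step S502: convert both color images to single-channel grayscale.
    gray_video = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)            # first grayscale image
    gray_whiteboard = cv2.cvtColor(whiteboard_image, cv2.COLOR_BGR2GRAY)  # second grayscale image

Working on single-channel images reduces the per-pixel work of all later steps, which is the efficiency gain described above.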
Optionally, in other embodiments, before the first terminal device converts the video image and the electronic whiteboard shared image to grayscale, the first terminal device may first cut out a first area from the video image, wherein the first area contains the first shooting object and the second shooting object, and the size of the first area is equal to the size of the electronic whiteboard shared image.
During shooting, the image capturing device captures not only the conference speaker but also the electronic whiteboard. Therefore, to facilitate the subsequent image overlay, part of the video image can be cut out; that is, the first area is cut out and then enlarged and projected, so that the processed first area is equal in size to the electronic whiteboard shared image. In addition, to make the electronic whiteboard easy to identify, the first terminal device can recognize three preset ring markers on the electronic whiteboard, from which it learns the size of the electronic whiteboard shared image. Refer to fig. 6a, the schematic diagram of identifying an electronic whiteboard provided in this embodiment: as shown in fig. 6a, the three preset ring markers are placed at three corners of the electronic whiteboard, so once the first terminal device has identified them, it can determine the size of the electronic whiteboard shared image by calculating the distances between the three markers.
It should be noted that fig. 6a is a color drawing in which the hatched box portion represents the second shooting object, reflecting the shared content.
In addition, please refer to fig. 6b, a schematic diagram of cutting a first region out of a video image according to an embodiment of the present application. As shown in fig. 6b, the first region contains the first shooting object and the second shooting object, and after enlargement and projection, the size of the first region matches the size of the electronic whiteboard shared image.
It should be noted that fig. 6b is a color drawing in which the hatched box portion represents the second shooting object and the hatched figure represents the first shooting object, reflecting the conference speaker.
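Because exactly three ring markers are located, the enlargement and projection of the first region can be expressed as an affine transform, which three point correspondences fully determine. The sketch below continues the Python/OpenCV assumption; the marker coordinates and the 1920x1080 target size are hypothetical values, not values from the embodiments:

    import numpy as np

    # Three marker positions found in the video frame (hypothetical detections),
    # and the corners of the electronic whiteboard shared image they map to.
    src_pts = np.float32([[412, 88], [1508, 95], [405, 706]])
    dst_pts = np.float32([[0, 0], [1920, 0], [0, 1080]])

    # Build the affine transform from the three correspondences and warp the video
    # frame so the first region matches the shared image size (1920x1080 assumed).
    M = cv2.getAffineTransform(src_pts, dst_pts)
    first_region = cv2.warpAffine(video_frame, M, (1920, 1080))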
Please refer to fig. 6c, a schematic diagram of grayscale conversion of a video image according to an embodiment of the present application. In fig. 6c, the left image is the video image before grayscale conversion, i.e., a color image; the right image is the video image after grayscale conversion, i.e., the first grayscale image in step S502 above (the gray fill indicates the first shooting object and the second shooting object after conversion). Similarly, please refer to fig. 6d, a schematic diagram of grayscale conversion of the electronic whiteboard shared image according to an embodiment of the present application. In fig. 6d, the left image is the electronic whiteboard shared image before grayscale conversion, i.e., actually a color image; the right image is the converted electronic whiteboard shared image, i.e., the second grayscale image in step S502 above (the gray fill indicates the second shooting object after conversion).
It can be appreciated that the second grayscale image shown on the right of fig. 6d can be used as the background image.
S503. The first terminal device processes the first grayscale image and the second grayscale image based on a preset background subtraction algorithm to obtain a third grayscale image, wherein the third grayscale image does not contain the second shooting object.
In this embodiment, the preset background subtraction algorithm can eliminate the background in an image, and since the second grayscale image can be used as the background image, the first terminal device can process the first grayscale image and the second grayscale image based on the preset background subtraction algorithm to obtain the third grayscale image. That is, the third grayscale image no longer contains the second shooting object of the second grayscale image, but only the first shooting object.
Please refer to fig. 6e, a schematic diagram of background subtraction provided in this application. As shown in fig. 6e, after the background subtraction described above is applied to the first and second grayscale images in fig. 6c and fig. 6d, the resulting third grayscale image does not contain the second shooting object, but only the first shooting object.
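A minimal form of the preset background subtraction, assuming a per-frame absolute difference (the embodiments do not name a specific subtraction algorithm), continuing the sketch above:

    # Gray the warped first region so it aligns pixel-for-pixel with the background.
    gray_region = cv2.cvtColor(first_region, cv2.COLOR_BGR2GRAY)

    # Subtracting the background (second grayscale image) cancels the shared
    # content; what remains is dominated by the first shooting object.
    third_gray = cv2.absdiff(gray_region, gray_whiteboard)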
S504. The first terminal device performs binarization processing on the third grayscale image to obtain a second image.
In this embodiment, the first terminal device may binarize the third grayscale image with the OTSU thresholding algorithm to obtain the second image. It can be appreciated that the second image likewise contains the first shooting object.
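Continuing the sketch, OpenCV exposes OTSU thresholding through cv2.threshold; the threshold argument of 0 is ignored when the OTSU flag is set, because the algorithm selects the threshold itself:

    # Step S504: binarize the third grayscale image with an OTSU-selected threshold.
    _, second_image = cv2.threshold(third_gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)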
S505. The first terminal device performs noise reduction processing on the second image to obtain a third image.
In this embodiment, after obtaining the second image, the first terminal device may also perform noise reduction processing on it, for example eliminating noise and stray points, to obtain the third image. It can be appreciated that, compared with the second image without noise reduction, using the third image minimizes the amount of subsequent computation and provides a higher-quality, clearer result.
Please refer to fig. 6f, a schematic diagram of the noise reduction processing performed on the third grayscale image according to an embodiment of the present application. In fig. 6f, the left image is the third grayscale image; the right image is the third image after binarization and noise reduction (shown with black fill).
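The embodiments do not specify the noise reduction method; morphological opening and closing, a common choice for cleaning a binary silhouette, is assumed here:

    # Step S505: remove isolated specks (opening), then fill small holes (closing).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    third_image = cv2.morphologyEx(second_image, cv2.MORPH_OPEN, kernel)
    third_image = cv2.morphologyEx(third_image, cv2.MORPH_CLOSE, kernel)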
S506. The first terminal device processes the third image based on a preset boundary-finding algorithm to obtain shooting position information of the first shooting object in the video image.
In this embodiment, in order to superimpose the first shooting object onto the electronic whiteboard shared image, the first terminal device needs, after obtaining the third image, to extract the first captured image corresponding to the first shooting object, so that the first captured image and the electronic whiteboard shared image can be superimposed and the first shooting object and the second shooting object displayed together on the same first display screen.
Therefore, the first terminal device can run the preset boundary-finding algorithm on the third image to obtain the shooting position information of the first shooting object in the video image. In other words, the shooting position information indicates the specific position of the first shooting object in the video image.
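One plausible realization of the boundary finding is contour detection over the binary third image; treating the largest foreground blob as the speaker is our assumption, not something the embodiments state:

    # Step S506: find external contours and keep the largest one.
    contours, _ = cv2.findContours(third_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)

    # The bounding rectangle serves as the shooting position information.
    x, y, w, h = cv2.boundingRect(largest)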
S507. The first terminal device determines the image area of the first shooting object in the video image based on the shooting position information.
In this embodiment, since the third image is obtained from the video image through the series of processing steps above, the specific position of the first shooting object in the third image actually coincides with its specific position in the video image. Therefore, the first terminal device can accurately locate the image area of the first shooting object in the video image based on the shooting position information.
Please refer to fig. 6g, a schematic diagram of locating the image area of the first shooting object according to an embodiment of the present application. In fig. 6g, the left image is the third image; the portrait area R in the right image is the image area of the first shooting object in the video image.
S508, the first terminal device crops the image area from the video image to obtain a first shot image corresponding to the first shooting object.
In an embodiment, after determining the image area of the first shooting object in the video image, the first terminal device may crop that image area out of the video image, so as to obtain the first shot image corresponding to the first shooting object.
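Continuing the sketch, because the mask coordinates coincide with the video image coordinates, the cropping reduces to array slicing; the bounding box (x, y, w, h) comes from the boundary-finding sketch above:

```python
# Crop the speaker region out of the original colour frame; the slice
# corresponds to the "first shot image" of this embodiment.
video_bgr = cv2.imread("video_frame.png")
first_shot_image = video_bgr[y:y + h, x:x + w]
```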
S509, the first terminal device superimposes the first shot image and the electronic whiteboard shared image to generate a first image.
Please refer to fig. 6h, which is a schematic diagram of image superimposition in an embodiment of the present application. As can be seen from fig. 6h, after the first terminal device superimposes the first shot image and the electronic whiteboard shared image, the generated first image reflects that the first shooting object and the second shooting object are on the same first display screen.
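A minimal way to superimpose the cropped speaker onto the shared image, continuing the sketch and assuming the same coordinates are reused on the whiteboard side, is a direct region copy; an alpha blend or mask-based composite would be a natural refinement not described here:

```python
# Paste the cropped speaker onto a copy of the electronic whiteboard
# shared image at the same coordinates, producing the composite
# "first image" shown on the first display screen.
board_bgr = cv2.imread("whiteboard_shared.png")
first_image = board_bgr.copy()
first_image[y:y + h, x:x + w] = first_shot_image
cv2.imwrite("first_image.png", first_image)
```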
It should be understood that the image superimposition flow described in step 405 and fig. 5 may also be performed by the server described above; having the first terminal device perform it in this embodiment is only an example, and this is not specifically limited in the embodiments of the application.
In the embodiment of the application, because the first image is generated from the video media stream and the electronic whiteboard shared stream, the first terminal can display the first image on a single first display screen. A user therefore does not need to look back and forth between two display screens to follow the video image and the shared content, and the shared content is less likely to be blocked, so that the user can see the shared content on the electronic whiteboard more clearly while watching the real image of the conference speaker in the shared conference on the same display screen.
The scheme provided by the embodiments of the application has so far been introduced mainly from the perspective of the method. It is understood that, in order to realize the functions described above, the first terminal device, the server and the second terminal device include corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the functions described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
From the perspective of an entity device, the first terminal device, the server, and the second terminal device may be implemented by one entity device, may also be implemented by multiple entity devices together, and may also be a logic function unit in one entity device, which is not specifically limited in this embodiment of the present application.
For example, the first terminal device, the server and the second terminal device described above may each be implemented by the communication device in fig. 7. Fig. 7 is a schematic diagram of the hardware structure of a communication device according to an embodiment of the present application. The communication device includes at least one processor 701, a memory 702, a communication line 703 and a transceiver 704.
The processor 701 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs of the present application.
The communication line 703 may include a path for transmitting information between the aforementioned components.
The transceiver 704 may be any device for communicating with other devices or communication networks, such as an Ethernet, a radio access network (RAN) or a wireless local area network (WLAN). The transceiver 704 may also be a transceiving circuit or a transceiver. The communication device may further include a communication interface 706.
The memory 702 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be separate and coupled to the processor 701 via the communication line 703. The memory 702 may also be integrated with the processor 701.
The memory 702 is configured to store the computer-executable instructions for executing the solutions of the present application, and their execution is controlled by the processor 701. The processor 701 is configured to execute the computer-executable instructions stored in the memory 702, so as to implement the image display method provided by the above method embodiments of the present application.
In a possible implementation manner, the computer-executable instructions in the embodiments of the present application may also be referred to as application program code, which is not specifically limited in the embodiments of the present application.
In a specific implementation, as an embodiment, the processor 701 may include one or more CPUs, such as CPU0 and CPU1 in fig. 7.
In a specific implementation, as an embodiment, the communication device may include multiple processors, such as the processor 701 and the processor 705 in fig. 7. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer-executable instructions).
From the perspective of functional units, the present application may divide the functional units of the first terminal device, the server, and the second terminal device according to the above method embodiments, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one functional unit. The integrated functional unit can be realized in the form of hardware or in the form of a software functional unit.
For example, in the case of dividing each functional unit in an integrated manner, fig. 8 shows a schematic structural diagram of a first terminal device. As shown in fig. 8, an embodiment of the first terminal device 80 of the present application may include:
a first receiving unit 801, configured to receive a video media stream and an electronic whiteboard shared stream sent by a server, where the video media stream includes a first shooting object and a second shooting object, the electronic whiteboard shared stream includes the second shooting object, the first shooting object reflects a conference speaker in a shared conference, the second shooting object reflects shared content on an electronic whiteboard, and each first terminal device in at least one first terminal device is connected to the shared conference;
the processing unit 802 is configured to generate a first image according to the video media stream and the electronic whiteboard shared stream, where the first image reflects that the first shooting object and the second shooting object are on the first display screen;
a display unit 803 for displaying the first image on the first display screen.
In some embodiments of the present application, the processing unit 802 includes:
the obtaining module is used for obtaining a video image according to the video media stream and obtaining an electronic whiteboard shared image based on the electronic whiteboard shared stream;
the acquisition module is used for acquiring a first shot image corresponding to the first shooting object according to the video image and the electronic whiteboard shared image;
and the generating module is used for superimposing the first shot image and the electronic whiteboard shared image to generate the first image.
In some embodiments of the present application, the acquisition module includes:
the obtaining submodule is used for performing grayscale processing on the video image to obtain a first grayscale image and performing grayscale processing on the electronic whiteboard shared image to obtain a second grayscale image;
the obtaining submodule is further used for processing the first grayscale image and the second grayscale image based on a preset background subtraction algorithm to obtain a third grayscale image, wherein the third grayscale image does not contain the second shooting object;
a determining submodule, for determining the shooting position information of the first shooting object according to the third grayscale image;
and an extraction submodule, for extracting the first shot image corresponding to the first shooting object from the video image according to the shooting position information.
In some embodiments of the present application, the determining submodule is configured to:
perform binarization processing on the third grayscale image to obtain a second image;
perform noise reduction processing on the second image to obtain a third image;
and process the third image based on a preset boundary-finding algorithm to obtain the shooting position information of the first shooting object in the video image.
Correspondingly, the extraction submodule is configured to:
determine an image area of the first shooting object in the video image according to the shooting position information;
and crop the image area from the video image to obtain the first shot image corresponding to the first shooting object.
In some embodiments of the present application, the display unit 803 is further configured to display the electronic whiteboard shared image on a second display screen, where the second display screen is on the same first terminal device as the first display screen.
In some embodiments of the present application, the processing unit 802 is further configured to, before the first shot image corresponding to the first shooting object is obtained based on the video image and the electronic whiteboard shared image, cut out a first area in the video image, where the first area includes the first shooting object and the second shooting object, and the size of the first area is equal to the size of the electronic whiteboard shared image.
The first terminal device 80 is described above mainly from the functional module perspective, and the server will be described below from the functional module perspective. For example, in the case where the functional units are divided in an integrated manner, fig. 9 shows a schematic structural diagram of a server. As shown in fig. 9, one embodiment of the server 90 of the present application may include:
a second receiving unit 901, configured to receive a video media stream and an electronic whiteboard shared stream sent by a second terminal device, where the video media stream includes a first shooting object and a second shooting object, the electronic whiteboard shared stream includes the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on an electronic whiteboard;
a first sending unit 902, configured to send the video media stream and the electronic whiteboard shared stream to the at least one first terminal device, so that each of the at least one first terminal device generates a first image and displays the first image on a first display screen, where the first image reflects that the first shooting object and the second shooting object are on the first display screen, and the second terminal device and each of the at least one first terminal device are connected to the shared conference.
The first terminal device 80 and the server 90 are described above mainly from the functional module perspective, and the second terminal device will be described below from the functional module perspective. For example, in the case of dividing each functional unit in an integrated manner, fig. 10 shows a schematic structural diagram of a second terminal device. As shown in fig. 10, an embodiment of the second terminal device 100 of the present application may include:
an obtaining unit 1001 configured to obtain a video image and an electronic whiteboard shared image, where the video image includes a first shooting object and a second shooting object, the electronic whiteboard shared image includes the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on an electronic whiteboard;
a generating unit 1002, configured to generate a video media stream corresponding to a video image and an electronic whiteboard shared stream corresponding to an electronic whiteboard shared image;
a second sending unit 1003, configured to send the video media stream and the electronic whiteboard shared stream to the server, so that the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, where the second terminal device and each of the at least one first terminal device are connected to the shared conference.
In some embodiments of the present application, the obtaining unit 1001 includes:
and the acquisition module is used for receiving the video image sent by the camera equipment and receiving the electronic whiteboard shared image sent by the electronic whiteboard.
The first terminal device 80, the server 90 and the second terminal device 100 provided in the embodiments of the present application are configured to execute the methods in the method embodiments corresponding to any one of fig. 3 to fig. 6h; for details, reference may therefore be made to the relevant parts of the method embodiments corresponding to fig. 3 to fig. 6h.
In the embodiment of the present application, the first terminal device 80, the server 90, and the second terminal device 100 are presented in a form of dividing each functional unit in an integrated manner. "functional unit" herein may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that may provide the described functionality. In a simple embodiment, those skilled in the art may appreciate that the first terminal device 80, the server 90 and the second terminal device 100 may take the form shown in fig. 7.
For example, the processor 701 in fig. 7 may cause the first terminal device 80, the server 90 and the second terminal device 100 to execute the method in the method embodiment corresponding to any one of fig. 3 to fig. 6h by invoking the computer-executable instructions stored in the memory 702.
Specifically, the functions/implementation processes of the processing unit 802 and the display unit 803 in fig. 8 and of the generating unit 1002 in fig. 10 can be implemented by the processor 701 in fig. 7 invoking the computer-executable instructions stored in the memory 702, and the functions/implementation processes of the first receiving unit 801 in fig. 8 can be implemented by the transceiver 704 in fig. 7.
The functions/implementation procedures of the second receiving unit 901 and the first transmitting unit 902 in fig. 9 may be implemented by the transceiver 704 in fig. 7.
Alternatively, the function/implementation process of the generating unit 1002 in fig. 10 may be implemented by the processor 701 in fig. 7 invoking the computer-executable instructions stored in the memory 702, and the functions/implementation processes of the obtaining unit 1001 and the second sending unit 1003 in fig. 10 may be implemented by the transceiver 704 in fig. 7.
In the device of fig. 7, the components are communicatively connected, that is, the processing unit (or processor), the storage unit (or memory) and the transceiving unit (or transceiver) communicate with one another through internal connection paths to transfer control and/or data signals. The above method embodiments of the present application may be applied to a processor, or the processor may implement the steps of the above method embodiments. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a central processing unit (CPU), a network processor (NP) or a combination of a CPU and an NP, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logic blocks disclosed in this application may be implemented or performed accordingly. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in this application may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware. Although only one processor is shown in the figure, the apparatus may comprise multiple processors, or a processor may comprise multiple processing units. Specifically, the processor may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
The memory is used for storing computer instructions executed by the processor. The memory may be a memory circuit or a memory. The memory may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory, a programmable read-only memory, an erasable programmable read-only memory, an electrically erasable programmable read-only memory, or a flash memory. The volatile memory may be a random access memory, which acts as an external cache. The memory may be independent of the processor, or may be a storage unit in the processor, which is not limited herein. Although only one memory is shown in the figure, the apparatus may comprise a plurality of memories or the memory may comprise a plurality of memory units.
The transceiver is configured to enable the processor to exchange content with other elements or network elements. Specifically, the transceiver may be a communication interface of the apparatus, a transceiving circuit or a communication unit, and may also be a transceiver. The transceiver may also be a communication interface or a transceiving circuit of the processor. Alternatively, the transceiver may be a transceiver chip. The transceiver may also include a sending unit and/or a receiving unit. In one possible implementation, the transceiver may include at least one communication interface. In another possible implementation, the transceiver may be a unit implemented in software. In the embodiments of the application, the processor may interact with other elements or network elements through the transceiver. For example, the processor obtains or receives content from other network elements through the transceiver. If the processor and the transceiver are physically separate components, the processor may also interact with other elements of the apparatus without going through the transceiver.
In one possible implementation, the processor, the memory, and the transceiver may be connected to each other by a bus. The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the embodiments of the present application, various illustrations are made for the convenience of understanding. However, these examples are merely examples and are not meant to be the best mode of carrying out the present application.
The above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof, and when implemented using software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The technical solutions provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (22)

1. A method of image display, comprising:
at least one first terminal device receives a video media stream and an electronic whiteboard shared stream sent by a server, wherein the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, the second shooting object reflects shared content on an electronic whiteboard, and each first terminal device in the at least one first terminal device is connected to the shared conference;
each first terminal device generates a first image based on the video media stream and the electronic whiteboard shared stream, wherein the first image reflects that the first shooting object and the second shooting object are on a first display screen;
each first terminal device displays the first image on the first display screen.
2. The method of claim 1, wherein the generating, by each first terminal device, of a first image based on the video media stream and the electronic whiteboard shared stream comprises:
the first terminal device obtains a video image based on the video media stream and obtains an electronic whiteboard shared image based on the electronic whiteboard shared stream;
the first terminal device obtains a first shot image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image;
and the first terminal device superimposes the first shot image and the electronic whiteboard shared image to generate the first image.
3. The method according to claim 2, wherein the obtaining, by the first terminal device, of a first shot image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image comprises:
the first terminal device performs grayscale processing on the video image to obtain a first grayscale image, and performs grayscale processing on the electronic whiteboard shared image to obtain a second grayscale image;
the first terminal device processes the first grayscale image and the second grayscale image based on a preset background subtraction algorithm to obtain a third grayscale image, wherein the third grayscale image does not contain the second shooting object;
the first terminal device determines shooting position information of the first shooting object based on the third grayscale image;
and the first terminal device extracts the first shot image corresponding to the first shooting object from the video image based on the shooting position information.
4. The method according to claim 3, wherein the determining, by the first terminal device, of the shooting position information of the first shooting object in the video image based on the third grayscale image comprises:
the first terminal device performs binarization processing on the third grayscale image to obtain a second image;
the first terminal device performs noise reduction processing on the second image to obtain a third image;
the first terminal device processes the third image based on a preset boundary-finding algorithm to obtain the shooting position information of the first shooting object in the video image;
correspondingly, the extracting, by the first terminal device, of the first shot image corresponding to the first shooting object from the video image based on the shooting position information comprises:
the first terminal device determines an image area of the first shooting object in the video image based on the shooting position information;
and the first terminal device crops the image area from the video image to obtain the first shot image corresponding to the first shooting object.
5. The method according to any one of claims 2-4, further comprising:
and the first terminal equipment displays the electronic whiteboard shared image on a second display screen, wherein the second display screen and the first display screen are on the same first terminal equipment.
6. The method according to any one of claims 2 to 5, wherein before the first terminal device obtains the first shot image corresponding to the first shooting object based on the video image and the electronic whiteboard shared image, the method further comprises:
the first terminal device cuts out a first area in the video image, wherein the first area comprises the first shooting object and the second shooting object, and the size of the first area is equal to the size of the electronic whiteboard shared image.
7. A method of image display, comprising:
the server receives a video media stream and an electronic whiteboard shared stream sent by second terminal equipment, wherein the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared contents on the electronic whiteboard;
the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, so that each first terminal device of the at least one first terminal device generates a first image and displays the first image on a first display screen, wherein the first image reflects that the first shooting object and the second shooting object are on the first display screen, and the second terminal device and each first terminal device of the at least one first terminal device are connected to the shared conference.
8. A method of image display, comprising:
the method comprises the steps that a second terminal device obtains a video image and an electronic whiteboard shared image, wherein the video image comprises a first shooting object and a second shooting object, the electronic whiteboard shared image comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared contents on the electronic whiteboard;
the second terminal equipment generates a video media stream corresponding to the video image and an electronic whiteboard shared stream corresponding to the electronic whiteboard shared image;
the second terminal device sends the video media stream and the electronic whiteboard shared stream to a server, so that the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, wherein the second terminal device and each first terminal device in the at least one first terminal device are connected to the shared conference.
9. The method according to claim 8, wherein the second terminal device acquires the video image and the electronic whiteboard shared image, and comprises:
and the second terminal equipment receives the video image sent by the camera equipment and receives the electronic whiteboard shared image sent by the electronic whiteboard.
10. A first terminal device, comprising:
the system comprises a first receiving unit, a second receiving unit and a third receiving unit, wherein the first receiving unit is used for receiving a video media stream and an electronic whiteboard shared stream which are sent by a server, the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, the second shooting object reflects shared contents on an electronic whiteboard, and each first terminal device in at least one first terminal device is connected to the shared conference;
the processing unit is used for generating a first image according to the video media stream and the electronic whiteboard shared stream, wherein the first image reflects the first shooting object and the second shooting object on a first display screen;
a display unit for displaying the first image on the first display screen.
11. The first terminal device of claim 10, wherein the processing unit comprises:
an obtaining module, configured to obtain a video image according to the video media stream and obtain an electronic whiteboard shared image based on the electronic whiteboard shared stream;
an acquisition module, configured to acquire a first shot image corresponding to the first shooting object according to the video image and the electronic whiteboard shared image;
and a generating module, configured to superimpose the first shot image and the electronic whiteboard shared image to generate the first image.
12. The first terminal device of claim 11, wherein the acquisition module comprises:
an obtaining submodule, configured to perform grayscale processing on the video image to obtain a first grayscale image and perform grayscale processing on the electronic whiteboard shared image to obtain a second grayscale image;
the obtaining submodule being further configured to process the first grayscale image and the second grayscale image based on a preset background subtraction algorithm to obtain a third grayscale image, wherein the third grayscale image does not contain the second shooting object;
a determining submodule, configured to determine the shooting position information of the first shooting object according to the third grayscale image;
and an extraction submodule, configured to extract the first shot image corresponding to the first shooting object from the video image according to the shooting position information.
13. The first terminal device of claim 12, wherein the determining submodule is configured to:
perform binarization processing on the third grayscale image to obtain a second image;
perform noise reduction processing on the second image to obtain a third image;
and process the third image based on a preset boundary-finding algorithm to obtain the shooting position information of the first shooting object in the video image;
correspondingly, the extraction submodule is configured to:
determine an image area of the first shooting object in the video image according to the shooting position information;
and crop the image area from the video image to obtain the first shot image corresponding to the first shooting object.
14. The first terminal device according to any one of claims 11-13, wherein the display unit is further configured to:
and displaying the electronic whiteboard shared image on a second display screen, wherein the second display screen and the first display screen are on the same first terminal device.
15. The first terminal device according to any one of claims 11-14, wherein the processing unit is further configured to:
before the first shot image corresponding to the first shooting object is obtained based on the video image and the electronic whiteboard shared image, cut out a first area in the video image, wherein the first area comprises the first shooting object and the second shooting object, and the size of the first area is equal to the size of the electronic whiteboard shared image.
16. A server, comprising:
the second receiving unit is used for receiving a video media stream and an electronic whiteboard shared stream sent by second terminal equipment, wherein the video media stream comprises a first shooting object and a second shooting object, the electronic whiteboard shared stream comprises the second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared contents on the electronic whiteboard;
a first sending unit, configured to send the video media stream and the electronic whiteboard shared stream to at least one first terminal device, so that each of the at least one first terminal device generates a first image and displays the first image on a first display screen, wherein the first image reflects that the first shooting object and the second shooting object are on the first display screen, and the second terminal device and each of the at least one first terminal device are connected to the shared conference.
17. A second terminal device, comprising:
the electronic whiteboard shared image comprises a second shooting object, the first shooting object reflects a conference speaker in a shared conference, and the second shooting object reflects shared content on the electronic whiteboard;
the generating unit is used for generating a video media stream corresponding to the video image and an electronic whiteboard shared stream corresponding to the electronic whiteboard shared image;
and a second sending unit, configured to send the video media stream and the electronic whiteboard shared stream to a server, so that the server sends the video media stream and the electronic whiteboard shared stream to at least one first terminal device, wherein the second terminal device and each of the at least one first terminal device are connected to the shared conference.
18. The second terminal device according to claim 17, wherein the obtaining unit includes:
the acquisition module is used for receiving the video image sent by the camera equipment and receiving the electronic whiteboard shared image sent by the electronic whiteboard.
19. A first terminal device, comprising:
a processor and a memory, wherein the processor and the memory communicate with each other;
the memory is configured to store instructions;
and the processor is configured to execute the instructions in the memory to perform the method of any one of claims 1 to 6.
20. A server, comprising:
a processor and a memory, wherein the processor and the memory communicate with each other;
the memory is configured to store instructions;
and the processor is configured to execute the instructions in the memory to perform the method of claim 7.
21. A second terminal device, comprising:
a processor and a memory, wherein the processor and the memory communicate with each other;
the memory is configured to store instructions;
and the processor is configured to execute the instructions in the memory to perform the method of any one of claims 8 to 9.
22. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-6, 7, or 8-9.