CN110430384B - Video call method and device, intelligent terminal and storage medium - Google Patents
- Publication number: CN110430384B (application CN201910782500.8A)
- Authority
- CN
- China
- Prior art keywords
- video call
- image
- target objects
- analysis result
- windows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a video call method and device, an intelligent terminal, and a storage medium in the field of electronic technology. The method comprises the following steps: when a video call starts, analyzing collected images of one party of the video call to obtain an analysis result; if it is determined from the analysis result that a plurality of target objects meeting a preset split-screen condition exist in the image, increasing the number of video call windows; and distributing the images of the target objects to the corresponding video call windows for display according to a specified distribution rule. This method avoids the interference that arises when several users share one space during a video call, thereby improving video call quality.
Description
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a video call method and apparatus, an intelligent terminal, and a storage medium.
Background
Video calls are widely used in social life: a user can see the other party during the call, which greatly improves everyday communication. However, when multiple users each hold an intelligent terminal for an online call while at least two of them are in the same or adjacent space, the calls interfere with one another and the video call effect is poor.
Disclosure of Invention
The embodiment of the application provides a video call method and device, an intelligent terminal and a storage medium, which are used for solving the problem of poor video call effect in the prior art.
In a first aspect, an embodiment of the present application provides a video call method, where the method includes:
when the video call starts, analyzing collected images of one party of the video call to obtain an analysis result;
if a plurality of target objects meeting preset screen splitting conditions exist in the image according to the analysis result, increasing the number of video call windows;
and distributing the images of the target objects to the corresponding video call windows for display according to a specified distribution rule.
In a second aspect, an embodiment of the present invention further provides a video call device, where the device includes:
the image analysis module is used for analyzing collected images of one party of the video call when the call starts, to obtain an analysis result;
the video call window increasing module is used for increasing the number of video call windows if a plurality of target objects meeting the preset split screen condition exist in the image according to the analysis result;
and the image distribution module is used for distributing the images of the target objects to the corresponding video call windows for display according to the specified distribution rule.
In a third aspect, an embodiment of the present invention further provides an intelligent terminal, including:
a memory and a processor;
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing, according to the obtained program, the video call method of any one of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are configured to cause a computer to execute any video call method in the embodiments of the present application.
According to the video call method and device, the intelligent terminal, and the storage medium, when a video call starts, collected images of one party of the call are analyzed to obtain an analysis result; if it is determined from the analysis result that a plurality of target objects meeting a preset split-screen condition exist in the images, the number of video call windows is increased, and the images of the target objects are then distributed to the corresponding video call windows for display according to a specified distribution rule. Conducting the video call in this way avoids the interference that arises when several users share one space during the call, thereby improving video call quality.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings described below cover only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a video call method according to an embodiment of the present application;
FIG. 2 is an interface diagram for adjusting a video call window according to an embodiment of the present invention;
fig. 3 is a flowchart of a video image processing method according to an embodiment of the present application;
fig. 4 is an effect diagram of video image processing according to an embodiment of the present application;
fig. 5 is a diagram illustrating an effect of video image processing according to an embodiment of the present application;
fig. 6 is a flowchart of a method for adjusting a video call window according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a video call device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an intelligent terminal provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
Video calls bring great convenience to people's lives and add diversity to social life.
Referring to fig. 1, the video call method provided in the present application includes:
step 101: when the video call starts, the images of the same side of the collected video are analyzed to obtain an analysis result.
It should be noted that "one party of the video call" refers to the users on the same terminal side. The present application analyzes and explains only the image of one party of the call; the other parties may apply the same method, which is not repeated here. For example, suppose user A1 wants to make a video call with user A3. If user A2 is in the same space as user A1 and they use the same terminal, then A1 and A2 form one video party; likewise, if user A3 and user A4 are in the same space and share a terminal, then A3 and A4 form the other video party.
Optionally, in an embodiment, before step 101 is executed it may first be determined whether the party has enabled the intelligent split-screen function; step 101 and the subsequent operations are executed only if it is enabled. For example, suppose the twins Xiaoming and Xiaohuang are on a video call with their parents at the far end: the twins share terminal A and the parents share terminal B. If both terminal A and terminal B enable the intelligent split-screen function, step 101 and the subsequent operations are executed on the images of both terminals, producing 4 split screens in total: Xiaoming and Xiaohuang each occupy one split screen displayed on terminal B, and each parent occupies one split screen displayed on terminal A.
If it is determined that terminal A has enabled the intelligent split-screen function while terminal B has disabled it, only the image of terminal A is analyzed to decide whether to split the screen. According to the analysis result, there are then 3 split screens in total: Xiaoming and Xiaohuang each occupy one split screen displayed on terminal B, while the parents share one split screen displayed on terminal A.
Correspondingly, if it is determined that both terminal A and terminal B have disabled the split-screen function, step 101 and the subsequent operations are executed on neither terminal's images.
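The per-terminal rule above (split-screen enabled: one window per participant; disabled: one shared window) can be sketched as follows. This is an illustrative reading of the twins/parents example, not code from the patent; the function name and input shape are assumptions.

```python
def plan_split_screens(terminals):
    """Return the total number of call windows for a call.

    `terminals` is a list of (participant_count, split_enabled) pairs,
    one per terminal. A terminal with the intelligent split-screen
    function enabled contributes one window per participant; a terminal
    with it disabled contributes a single shared window.
    """
    windows = 0
    for participants, split_enabled in terminals:
        windows += participants if split_enabled else 1
    return windows

# Terminal A (twins, split on) + terminal B (parents, split on) -> 4 windows
assert plan_split_screens([(2, True), (2, True)]) == 4
# Terminal A split on, terminal B split off -> 2 + 1 = 3 windows
assert plan_split_screens([(2, True), (2, False)]) == 3
```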
In one embodiment, a panoramic camera collects images of the surrounding environment in real time, capturing both the environment of each target object of the video call and the object's specific activity, for example: object a is cooking while object b is walking around the room. Because the panoramic camera captures the surrounding environment, the target objects can move freely during the call without the video call screen suddenly going white or black.
Step 102: judging whether a plurality of target objects meeting preset screen splitting conditions exist in the image or not according to the analysis result; if yes, go to step 103, otherwise go to step 104.
It should be noted that face frames in the image can be acquired from the analysis result. If a person in a face frame is in a sleep state, or the person's expression information indicates unwillingness to join the video call, the preset split-screen condition is not satisfied and no split screen is created. For example, while user A and user B are on a video call, image analysis of user A's space detects a baby; because the baby is too young to speak, the preset split-screen condition is not satisfied and no separate split screen is created for the baby.
Step 103: the number of video call windows is increased.
Step 104: the number of video call windows is maintained.
In one embodiment, if it is determined according to the analysis result that the number of target objects in the image exceeds a preset threshold, and an instruction to add video call windows is received, the number of video call windows is increased.
It should be noted that when the intelligent terminal determines from the analysis result that the number of target objects in the image exceeds the preset threshold, it checks whether the user has selected the instruction to add video call windows. If the user selects this instruction as needed, the terminal receives it, increases the number of video call windows, and prompts that windows have been added, as shown in fig. 2-A. If the user instead selects the instruction to disable adding windows, the terminal receives that instruction, cancels the increase, and prompts accordingly, as shown in fig. 2-B. In fig. 2, fig. 2-A is an interface diagram with window addition enabled, where the two target objects eligible for the video call are on the terminal side of user 2, and fig. 2-B is an interface diagram with window addition disabled. As shown in fig. 2-A, when user 2 turns on the switch for adding video call windows and users 1 and 2 make a video call, the mobile phone interface displays three video call windows, with the images of person 1 and person 2 displayed in the video interface corresponding to user 2. As shown in fig. 2-B, when user 2 turns off the switch, the interface displays only two video call windows. The user can thus open or close the switch for adding video call windows as needed.
By the method, the user can select whether to increase the video call window during the video call, and the method increases the selection mode during the video call.
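The decision flow of steps 102–104 combined with the user's switch can be sketched as below. This is a minimal illustration; the function name and parameters are assumptions, not the patent's wording.

```python
def decide_window_count(num_targets, threshold, user_allows_add, current_windows=1):
    """Decide the number of video call windows (sketch of steps 102-104).

    The screen is split only when the analysis finds more target objects
    than the preset threshold AND the user has left the 'add window'
    switch on; otherwise the existing layout is kept.
    """
    if num_targets > threshold and user_allows_add:
        return num_targets  # step 103: one window per detected target
    return current_windows  # step 104: keep the number of windows

# Two targets, threshold 1, switch on -> split into two windows
assert decide_window_count(2, 1, True) == 2
# Same scene with the switch off -> layout unchanged
assert decide_window_count(2, 1, False) == 1
```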
Step 105: and distributing the images of the target objects to corresponding video call windows for display according to the specified distribution rule.
In one embodiment, the preset split screen condition comprises: the number of target objects with the specified characteristics exceeds a preset threshold, step 105 comprises: and distributing the images of the target objects to corresponding video call windows for display, wherein the number of the target objects is the same as that of the video call windows.
By the method, the one-to-one correspondence between the target object and the video call window can be realized during split screen.
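The one-to-one distribution of step 105 can be sketched as a simple mapping from target objects to windows. The window naming and the use of a dict are illustrative assumptions.

```python
def distribute_images(target_crops):
    """Assign each target object's cropped image to its own call window,
    one-to-one: the number of windows equals the number of targets.

    `target_crops` maps a target identifier to that target's image crop
    (any object stands in for pixel data here).
    """
    return {f"window_{i}": crop
            for i, (tid, crop) in enumerate(sorted(target_crops.items()), start=1)}

layout = distribute_images({"A": "crop_a", "B": "crop_b"})
assert len(layout) == 2                      # one window per target
assert set(layout) == {"window_1", "window_2"}
```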
In an embodiment, the panoramic camera is a camera provided with an infrared detection device, so that the images collected by the panoramic camera provided with the infrared detection device include depth images.
Note that the depth image includes a distance value between the target object and the panoramic camera. Based on the camera with the infrared detection device, after step 103 increases the number of video call windows, a video image processing method may further be applied, as shown in fig. 3.
Step 301: judging whether the distance between the depth image display target object and the panoramic camera exceeds a preset distance threshold value or not; if yes, go to step 302, otherwise go to step 303.
Step 302: and increasing the object distance value of the panoramic camera to refocus the target object.
Step 303: and keeping the object distance value of the panoramic camera focused on the target object.
Step 304: and displaying the refocused image on a video call window corresponding to the target object.
It should be noted that if the distance between the target object and the panoramic camera exceeds the preset threshold, the above video image processing makes the image of the target object displayed in its video call window larger than it would be without processing, as shown in fig. 4: the user's image in the window is small before processing (fig. 4-A) and becomes larger after processing (fig. 4-B).
In addition, if the distance between the target object and the panoramic camera exceeds the preset threshold, the user may also manually adjust the size of the user image in the video call window, as shown in fig. 5: manually adjusting the size and position of the user image in fig. 5-A yields the image in fig. 5-B.
In this way, the image in the video call window is adjusted during the call: when the target object exceeds the preset distance value, the user's image in the window is adjusted automatically, improving the video call effect.
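The refocusing flow of steps 301–304 can be sketched as a threshold check on the depth value. The millimetre units and the fixed adjustment step are assumptions for illustration; the patent does not specify how much the object-distance value increases.

```python
def refocus_object_distance(depth_mm, threshold_mm, current_distance_mm, step_mm=100):
    """Sketch of steps 301-304: if the depth image shows the target
    farther away than the preset distance threshold, increase the
    camera's object-distance value so the target is refocused (and
    appears larger in its window); otherwise keep the current focus.
    """
    if depth_mm > threshold_mm:
        return current_distance_mm + step_mm  # step 302: refocus farther out
    return current_distance_mm                # step 303: keep focus as-is

# Target at 3.5 m with a 3.0 m threshold -> object distance increased
assert refocus_object_distance(3500, 3000, 1000) == 1100
# Target within the threshold -> focus unchanged
assert refocus_object_distance(2500, 3000, 1000) == 1000
```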
In one embodiment, after increasing the number of video call windows in step 103, a method for changing the video call windows according to the change of the target object is further included, as shown in fig. 6.
Step 601: and continuing to collect the image, and analyzing the collected current frame image to obtain an analysis result of the current frame image.
Step 602: judging whether the number of the target objects is increased or decreased according to the analysis result of the current frame image; if it is determined that the number of the target objects increases according to the analysis result of the current frame image, step 603 is performed, and if it is determined that the number of the target objects decreases according to the analysis result of the current frame image, step 604 is performed.
Step 603: and increasing the number of the video call windows, wherein the increased number of the video call windows is the increased number of the target objects.
Step 604: and determining a video call window corresponding to the reduced target object.
Step 605: and closing the video call window corresponding to the reduced target object.
It should be noted that during a video call, a database of target objects may be set up and maintained in the background on the intelligent terminal or the server. When the call starts, the database for the call may be initialized as empty; after target objects are identified, a one-to-one correspondence between each target object and a video call window is established. Optionally, as shown in table 1, the target object identifier identifies a unique participant of the current video call, the target object feature (for example a face feature or a human body feature) represents the feature of the corresponding target object, and the video call window identifier identifies a unique window of the current call. It should be noted that table 1 is only used to illustrate the embodiments of the present application and does not limit them.
TABLE 1
Target object identification | Target object features | Video call window identification
1 | Face features | I
… | … | …
n | Face features | N
After the database of table 1 is established, every time a frame of image is collected, the object features in the newly collected image are identified through image analysis and matched against the target object features in table 1, so as to determine to which video call window the identified target object's video frame data should be distributed. If features found in the image match no record in table 1, new target objects have joined: a corresponding record is added to table 1 and a new video call window is created, without affecting the video calls already established. Conversely, if a recorded target object's features match nothing in the image, the participants may have decreased; if the same target object's features fail to match over several consecutive frames, it can be determined that the target object has left the video call, so the corresponding video call window is closed and the record is deleted from table 1.
For example, suppose target objects A and B are on a video call from two different environment spaces, and target object C appears in the space where A is located. If A enables the function of adding a video call window, record information about C is added to table 1 and a new video call window is created for C, without affecting the call between A and B. Likewise, if A, B, and C are three target objects on a video call and D appears in A's space, record information about D is added to table 1. If C then leaves A's space during the four-person call, C's record is deleted from table 1 and the video call window corresponding to C is closed. In this way, the number of video call windows is adjusted dynamically.
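The table-1 registry and its add/remove behaviour can be sketched as a small class. Feature "matching" is simplified here to equality, and the consecutive-miss limit is an illustrative stand-in for the patent's "continuous multi-frame" condition; all names are assumptions.

```python
class CallRegistry:
    """Sketch of the table-1 registry: maps target-object features to
    video call windows, adding a window when a new feature appears and
    closing a window once a known feature has been absent for
    `miss_limit` consecutive frames."""

    def __init__(self, miss_limit=2):
        self.windows = {}       # feature -> window id (the table-1 record)
        self.misses = {}        # feature -> consecutive frames without a match
        self.next_window = 1
        self.miss_limit = miss_limit

    def update(self, frame_features):
        frame_features = set(frame_features)
        for feat in sorted(frame_features):
            if feat not in self.windows:          # new participant joins
                self.windows[feat] = self.next_window
                self.next_window += 1
            self.misses[feat] = 0
        for feat in list(self.windows):
            if feat not in frame_features:
                self.misses[feat] += 1
                if self.misses[feat] >= self.miss_limit:  # left the call
                    del self.windows[feat]
                    del self.misses[feat]
        return dict(self.windows)

reg = CallRegistry(miss_limit=2)
state = reg.update(["faceA", "faceB"])
assert state == {"faceA": 1, "faceB": 2}
reg.update(["faceA"])                 # faceB missing once: window kept
state = reg.update(["faceA"])         # missing twice: window closed
assert state == {"faceA": 1}
```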
In one embodiment, to avoid increasing the number of video call windows without limit, before increasing the number of windows it is further determined that the number of target objects is less than a specified number, where the specified number is greater than the preset threshold. The specified number can be a fixed value determined by actual requirements, or be set by the users of the video call.
For example, when the user is on a video call and the number of target objects grows, the video call windows are not increased without limit. Suppose the preset threshold is 1 and the specified number is 4: when the number of target objects increases to 4, the number of video call windows is not increased and a single video call window is displayed; if the number of target objects increases only to 3, the number of video call windows is increased.
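The cap in the example above can be sketched as follows. The fallback to a single shared window once the specified number is reached is our reading of the example's wording, not an explicit rule in the claims; names and defaults are assumptions.

```python
def windows_after_growth(num_targets, preset_threshold=1, specified_max=4):
    """Cap the split-screen count: split one-window-per-target only while
    the target count exceeds the preset threshold and stays below the
    specified number; at or above the cap, fall back to a single window
    (per the example's wording)."""
    if num_targets <= preset_threshold:
        return 1                 # no split needed
    if num_targets >= specified_max:
        return 1                 # cap reached: keep one shared window
    return num_targets

assert windows_after_growth(3) == 3  # 3 targets: windows increased
assert windows_after_growth(4) == 1  # 4 targets: cap reached
assert windows_after_growth(1) == 1  # below threshold: no split
```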
In this way, video call window resources are reasonably allocated and the video call effect is improved.
Referring to fig. 7, a video call device according to an embodiment of the present application includes: an image analysis module 71, a video call window adding module 72, and an image distribution module 73.
It should be noted that the image analysis module 71 is configured to analyze collected images of one party of the video call when the call starts, so as to obtain an analysis result.
And a video call window increasing module 72, configured to increase the number of video call windows if it is determined that multiple target objects meeting a preset split-screen condition exist in the image according to the analysis result.
And the image distribution module 73 is used for distributing the images of the target objects to the corresponding video call windows for display according to the specified distribution rule.
Optionally, the preset split screen condition includes: the number of target objects with the specified characteristics exceeds a preset threshold value;
the image distribution module 73 is configured to:
and distributing the images of the target objects to corresponding video call windows for display, wherein the number of the target objects is the same as that of the video call windows.
Optionally, the video call window adding module 72 is configured to:
and if the number of the target objects in the image is determined to exceed a preset threshold value according to the analysis result and an instruction for increasing the video call window is received, increasing the number of the video call windows.
Optionally, the image analysis module 71 is further configured to: and before analyzing the collected images, collecting the images of the surrounding environment in real time through the panoramic camera.
Optionally, the panoramic camera is a camera provided with an infrared detection device, and the image acquired by the panoramic camera includes a depth image;
the video call window adding module 72 is configured to, if it is determined according to the analysis result that a plurality of target objects meeting a preset split-screen condition exist in the image, after the number of video call windows is increased, further:
if the distance between the target object and the panoramic camera displayed by the depth image exceeds a preset distance threshold, increasing the object distance value of the panoramic camera to refocus the target object;
and displaying the refocused image on a video call window corresponding to the target object.
Optionally, after determining from the analysis result that a plurality of target objects meeting the preset split-screen condition exist in the image and increasing the number of video call windows, the video call window adding module 72 is further configured to:
continuing to collect the image, and analyzing the collected current frame image to obtain an analysis result of the current frame image;
and if the number of the target objects is determined to be increased according to the analysis result of the current frame image, increasing the number of the video call windows, wherein the increased number of the video call windows is the increased number of the target objects.
Optionally, before the video call window adding module 72 increases the number of video call windows, the apparatus further includes:
the target object determination module is used for determining that the number of the target objects is smaller than a specified number, wherein the specified number is larger than the preset threshold value.
Optionally, the apparatus further comprises: the video call window reducing module is used for determining a video call window corresponding to a reduced target object if the number of the target objects is reduced according to the analysis result of the current frame image;
and closing the video call window corresponding to the reduced target object.
After the video call method and apparatus in the exemplary embodiment of the present application are introduced, a smart terminal in another exemplary embodiment of the present application is introduced next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, a method, or a program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," a "module," or a "system."
In some possible embodiments, a smart terminal according to the present application may include at least one processor, and at least one memory. Wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the video call method according to various exemplary embodiments of the present application described above in the present specification. For example, the processor may perform steps 101-105 as shown in FIG. 1.
The smart terminal 130 according to this embodiment of the present application is described below with reference to fig. 8. The smart terminal 130 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, the smart terminal 130 is represented in the form of a general smart terminal. The components of the intelligent terminal 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
The intelligent terminal 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.) and/or any device (e.g., router, modem, etc.) that enables the intelligent terminal 130 to communicate with one or more other intelligent terminals. Such communication may occur via input/output (I/O) interfaces 135. Also, the intelligent terminal 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 136. As shown, the network adapter 136 communicates with other modules for the intelligent terminal 130 over the bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the smart terminal 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, aspects of the video call method provided in the present application may also be implemented in the form of a program product including a computer program. When the program product is run on a computer device, the computer program causes the computer device to perform the steps of the video call method according to the various exemplary embodiments of the present application described above in this specification; for example, the computer device may perform steps 101-105 as shown in FIG. 1.
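As a purely illustrative sketch of the claimed flow (analyze the local image, add windows when enough target objects with the specified characteristics are detected and the user confirms, cap the count, and give each target object its own window), the following Python fragment may help; every function name, threshold, and default value here is hypothetical and not taken from the patent:

```python
# Hypothetical sketch of the claimed split-screen flow.
# All names, thresholds, and defaults are illustrative only.

def update_call_windows(face_count, current_windows,
                        preset_threshold=1, max_windows=4,
                        user_approved=True):
    """Return the number of video call windows to display.

    Windows are added only when the number of detected target objects
    exceeds the preset threshold, the user has confirmed the split via
    an instruction, and the count stays below a specified maximum.
    """
    if (face_count > preset_threshold
            and face_count < max_windows
            and user_approved):
        return face_count          # one window per target object
    return current_windows


def assign_faces_to_windows(face_regions, window_count):
    """Distribute each target object's image to its own window;
    the distribution rule requires the two counts to be equal."""
    assert len(face_regions) == window_count
    return {window_id: region
            for window_id, region in enumerate(face_regions)}
```

Under these assumptions, three detected faces with one current window would yield three windows, while a count at or below the threshold (or at the cap) leaves the window count unchanged.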
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for a video call according to the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include a computer program, and be run on a smart terminal. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer program embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer programs for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The computer program may execute entirely on the user's smart terminal, partly on the user's device, as a stand-alone software package, partly on the user's smart terminal and partly on a remote smart terminal, or entirely on a remote smart terminal or server. In the latter case, the remote smart terminal may be connected to the user's smart terminal through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external smart terminal (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided into and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be broken down into multiple steps.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having a computer-usable program embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they become aware of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all variations and modifications that fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (14)
1. A method for video telephony, the method comprising:
when a video call starts, analyzing a collected image of the same side of the video call to obtain an analysis result, wherein the same side of the video call refers to the same terminal side;
if it is determined according to the analysis result that a plurality of target objects satisfying a preset split-screen condition exist in the image, increasing the number of video call windows, wherein the preset split-screen condition comprises: the number of target objects having specified characteristics exceeds a preset threshold; and the increasing comprises:
if it is determined according to the analysis result that the number of target objects in the image exceeds the preset threshold and an instruction for increasing the video call windows is received, increasing the number of video call windows; and
distributing the image of each target object to a corresponding video call window for display according to a specified distribution rule, wherein the distribution rule comprises:
distributing the images of the target objects to the corresponding video call windows for display, wherein the number of target objects is the same as the number of video call windows.
2. The method of claim 1, wherein acquiring the image comprises:
acquiring images of the surrounding environment in real time through a panoramic camera.
3. The method according to claim 2, wherein the panoramic camera is a camera provided with an infrared detection device, and the images collected by the panoramic camera include a depth image;
wherein, after increasing the number of video call windows when it is determined according to the analysis result that the number of target objects in the image exceeds the preset threshold, the method further comprises:
if the distance between a target object and the panoramic camera indicated by the depth image exceeds a preset distance threshold, increasing the object-distance value of the panoramic camera to refocus on the target object; and
displaying the refocused image in the video call window corresponding to the target object.
4. The method according to claim 1, wherein, after increasing the number of video call windows when it is determined according to the analysis result that a plurality of target objects satisfying the preset split-screen condition exist in the image, the method further comprises:
continuing to collect images, and analyzing the collected current frame image to obtain an analysis result of the current frame image; and
if it is determined according to the analysis result of the current frame image that the number of target objects has increased, increasing the number of video call windows, wherein the number of added video call windows equals the number of added target objects.
5. The method of claim 1 or 4, wherein prior to said increasing the number of video call windows, the method further comprises:
determining that the number of target objects is smaller than a specified number, wherein the specified number is larger than the preset threshold.
6. The method of claim 4, further comprising:
if it is determined according to the analysis result of the current frame image that the number of target objects has decreased, determining the video call windows corresponding to the removed target objects; and
closing the video call windows corresponding to the removed target objects.
7. A video call apparatus, the apparatus comprising:
the image analysis module is configured to, when a video call starts, analyze a collected image of the same side of the video call to obtain an analysis result, wherein the same side of the video call refers to the same terminal side;
the video call window increasing module is configured to increase the number of video call windows if it is determined according to the analysis result that a plurality of target objects satisfying a preset split-screen condition exist in the image, wherein the preset split-screen condition comprises: the number of target objects having specified characteristics exceeds a preset threshold; and the number of video call windows is increased if it is determined according to the analysis result that the number of target objects in the image exceeds the preset threshold and an instruction for increasing the video call windows is received; and
the image distribution module is configured to distribute the image of each target object to a corresponding video call window for display according to a specified distribution rule, namely to distribute the images of the target objects to the corresponding video call windows for display, wherein the number of target objects is the same as the number of video call windows.
8. The apparatus of claim 7, wherein the image analysis module is further configured to: before analyzing the collected image, collect images of the surrounding environment in real time through a panoramic camera.
9. The apparatus of claim 8, wherein the panoramic camera is a camera provided with an infrared detection device, and the images collected by the panoramic camera include a depth image;
wherein, after increasing the number of video call windows when it is determined according to the analysis result that a plurality of target objects satisfying the preset split-screen condition exist in the image, the video call window increasing module is further configured to:
if the distance between a target object and the panoramic camera indicated by the depth image exceeds a preset distance threshold, increase the object-distance value of the panoramic camera to refocus on the target object; and
display the refocused image in the video call window corresponding to the target object.
10. The apparatus according to claim 7, wherein, after increasing the number of video call windows when it is determined according to the analysis result that a plurality of target objects satisfying the preset split-screen condition exist in the image, the video call window increasing module is further configured to:
continue to collect images, and analyze the collected current frame image to obtain an analysis result of the current frame image; and
if it is determined according to the analysis result of the current frame image that the number of target objects has increased, increase the number of video call windows, wherein the number of added video call windows equals the number of added target objects.
11. The apparatus of claim 7 or 10, further comprising:
a target object determination module configured to, before the video call window increasing module increases the number of video call windows, determine that the number of target objects is smaller than a specified number, wherein the specified number is larger than the preset threshold.
12. The apparatus of claim 10, further comprising: a video call window reducing module configured to, if it is determined according to the analysis result of the current frame image that the number of target objects has decreased, determine the video call windows corresponding to the removed target objects; and
close the video call windows corresponding to the removed target objects.
13. An intelligent terminal, comprising: a memory and a processor;
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and execute, according to the obtained program, the method of any one of claims 1 to 6.
14. A computer storage medium storing computer-executable instructions for performing the method of any one of claims 1-6.
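The refocusing step recited in claims 3 and 9 (when the depth image places a target object beyond a preset distance threshold, the panoramic camera's object-distance value is increased so the target is refocused) can be sketched minimally as follows; the function name, the millimeter units, and the threshold and step values are all hypothetical, not specified by the patent:

```python
# Illustrative sketch of the depth-triggered refocus check.
# All names, units, and numeric defaults are hypothetical.

def refocus_if_needed(target_depth_mm: float,
                      object_distance_mm: float,
                      distance_threshold_mm: float = 1500.0,
                      step_mm: float = 250.0) -> float:
    """Return the (possibly increased) object-distance value.

    If the depth image shows the target object farther away than the
    preset distance threshold, increase the object-distance value so
    the camera refocuses on the target; otherwise keep the current
    focus unchanged.
    """
    if target_depth_mm > distance_threshold_mm:
        return object_distance_mm + step_mm
    return object_distance_mm
```

For example, under these assumed values a target measured at 2000 mm would trigger one focus step, while a target at 1000 mm would leave the focus as-is; the refocused image is then shown in that target's own video call window.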
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910782500.8A CN110430384B (en) | 2019-08-23 | 2019-08-23 | Video call method and device, intelligent terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110430384A CN110430384A (en) | 2019-11-08 |
CN110430384B true CN110430384B (en) | 2020-11-03 |
Family
ID=68417310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910782500.8A Active CN110430384B (en) | 2019-08-23 | 2019-08-23 | Video call method and device, intelligent terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110430384B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111131754A (en) * | 2019-12-25 | 2020-05-08 | 视联动力信息技术股份有限公司 | Control split screen method and device of conference management system |
CN111583251A (en) * | 2020-05-15 | 2020-08-25 | 国网浙江省电力有限公司信息通信分公司 | Video image analysis method and device and electronic equipment |
CN114071056B (en) * | 2020-08-06 | 2022-08-19 | 聚好看科技股份有限公司 | Video data display method and display device |
CN113473061B (en) * | 2021-06-10 | 2022-08-12 | 荣耀终端有限公司 | Video call method and electronic equipment |
CN114466154A (en) * | 2022-01-17 | 2022-05-10 | 珠海读书郎软件科技有限公司 | Split-screen video call method and device based on multi-screen telephone watch |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102446065A (en) * | 2010-09-30 | 2012-05-09 | 索尼公司 | Information processing apparatus and information processing method |
WO2018093197A2 (en) * | 2016-11-18 | 2018-05-24 | Samsung Electronics Co., Ltd. | Image processing method and electronic device supporting image processing |
CN109151367A (en) * | 2018-10-17 | 2019-01-04 | 维沃移动通信有限公司 | A kind of video call method and terminal device |
CN109151309A (en) * | 2018-08-31 | 2019-01-04 | 北京小鱼在家科技有限公司 | A kind of method for controlling rotation of camera, device, equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105159578A (en) * | 2015-08-24 | 2015-12-16 | 小米科技有限责任公司 | Video display mode switching method and apparatus |
CN106899800B (en) * | 2016-06-28 | 2020-02-14 | 阿里巴巴集团控股有限公司 | Camera focusing method and device and mobile terminal equipment |
US11212326B2 (en) * | 2016-10-31 | 2021-12-28 | Microsoft Technology Licensing, Llc | Enhanced techniques for joining communication sessions |
CN107770477B (en) * | 2017-11-07 | 2019-09-10 | Oppo广东移动通信有限公司 | Video call method, device, terminal and storage medium |
CN109089067A (en) * | 2018-09-12 | 2018-12-25 | 深圳市沃特沃德股份有限公司 | Videophone and its image capture method, device and computer readable storage medium |
CN109561257B (en) * | 2019-01-18 | 2020-09-18 | 深圳看到科技有限公司 | Picture focusing method, device, terminal and corresponding storage medium |
- 2019-08-23: CN application CN201910782500.8A filed; granted as patent CN110430384B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102446065A (en) * | 2010-09-30 | 2012-05-09 | 索尼公司 | Information processing apparatus and information processing method |
WO2018093197A2 (en) * | 2016-11-18 | 2018-05-24 | Samsung Electronics Co., Ltd. | Image processing method and electronic device supporting image processing |
CN109151309A (en) * | 2018-08-31 | 2019-01-04 | 北京小鱼在家科技有限公司 | A kind of method for controlling rotation of camera, device, equipment and storage medium |
CN109151367A (en) * | 2018-10-17 | 2019-01-04 | 维沃移动通信有限公司 | A kind of video call method and terminal device |
Non-Patent Citations (1)
Title |
---|
Design and Implementation of an Android-Based Instant Video Call System; Li Yachao; China Masters' Theses Full-text Database, Information Science and Technology; 2018-06-15; pp. 1-73 *
Also Published As
Publication number | Publication date |
---|---|
CN110430384A (en) | 2019-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110430384B (en) | Video call method and device, intelligent terminal and storage medium | |
CN110266879B (en) | Playing interface display method, device, terminal and storage medium | |
US9641801B2 (en) | Method, apparatus, and system for presenting communication information in video communication | |
CN106210797B (en) | Network live broadcast method and device | |
CN111460112A (en) | Online customer service consultation method, device, medium and electronic equipment | |
KR102119404B1 (en) | Interactive information providing system by collaboration of multiple chatbots and method thereof | |
CN109670632B (en) | Advertisement click rate estimation method, advertisement click rate estimation device, electronic device and storage medium | |
CN107748690A (en) | Using jump method, device and computer-readable storage medium | |
CN112799622A (en) | Application control method and device and electronic equipment | |
CN117238451A (en) | Training scheme determining method, device, electronic equipment and storage medium | |
CN110190975B (en) | Recommendation method and device for people to be referred, terminal equipment and storage medium | |
CN113656637B (en) | Video recommendation method and device, electronic equipment and storage medium | |
CN114422854A (en) | Data processing method and device, electronic equipment and storage medium | |
CN104793911B (en) | Processing method, device and terminal is presented using split screen | |
WO2023143518A1 (en) | Live streaming studio topic recommendation method and apparatus, device, and medium | |
KR102286578B1 (en) | Method and computer program for controlling display of consultation session | |
CN112019948B (en) | Intercommunication method for intercom equipment, intercom equipment and storage medium | |
CN113763137B (en) | Information pushing method and computer equipment | |
CN112291581B (en) | Server, terminal equipment, information processing method and device | |
CN114727119B (en) | Live broadcast continuous wheat control method, device and storage medium | |
KR20190030549A (en) | Method, system and non-transitory computer-readable recording medium for controlling flow of advertising contents based on video chat | |
CN114666643A (en) | Information display method and device, electronic equipment and storage medium | |
CN114745573A (en) | Video control method, client, server and system | |
CN110493473A (en) | Method, equipment and the computer storage medium of caller identification | |
CN116405736B (en) | Video recommendation method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||