CN110266994A - Video call method, video call device and terminal - Google Patents
- Publication number: CN110266994A
- Application number: CN201910561823.4A
- Authority
- CN
- China
- Prior art keywords
- face data
- target
- video
- image
- human face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Telephone Function (AREA)
Abstract
The application is applicable to the technical field of communications, and provides a video call method, a video call device and a terminal. The method comprises the following steps: acquiring a video image sent by a target opposite terminal; when it is determined, according to the video image, that the video image contains a face image, acquiring target face data matching the face image from a preset face database; performing face definition restoration on the video image based on the target face data to obtain a target video image; and displaying the target video image. The method improves the face image quality during a video call, improves image definition, and realizes a high-definition video call.
Description
Technical field
This application relates to the field of communication technology, and in particular to a video call method, a video call apparatus and a terminal.
Background
With the development of society, people attach increasing importance to the safety of students and young children. Parents usually equip a child with an electronic device such as a phone watch, so that they can reach the child by video or phone call in time, and so that the child can be located and found if lost.
However, devices such as current phone watches are usually affected by factors such as camera quality, network quality, power consumption and ambient light. During a video call, the portrait displayed at the opposite end is relatively blurry, and the video call experience is poor. A common current practice is to increase the camera resolution, but a higher-resolution camera still cannot offset the influence of the other factors on video definition, so a truly high-definition video call cannot be achieved.
Summary of the invention
In view of this, embodiments of the present application provide a video call method, a video call apparatus and a terminal, to solve the problem in the prior art that portrait definition in a video call is low due to factors such as camera quality, network quality, power consumption and ambient light.
A first aspect of the embodiments of the present application provides a video call method, including:
acquiring a video image sent by a target opposite terminal;
according to the video image, when it is determined that the video image contains a face image, acquiring target face data matching the face image from a preset face database;
performing face definition restoration on the video image based on the target face data to obtain a target video image; and
displaying the target video image.
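As a rough sketch of these four steps, assuming hypothetical helpers (`detect_face`, `lookup_face_data`, `restore_face`, `display_frame`) that stand in for the detection, database lookup, restoration and display described later; none of these names come from the patent:

```python
# Rough sketch of the claimed four-step flow. detect_face, lookup_face_data,
# restore_face and display_frame are hypothetical placeholders, not functions
# defined by the patent or any particular library.
import numpy as np

def handle_incoming_frame(frame: np.ndarray, peer_contact: str, face_db: dict) -> np.ndarray:
    """Process one video image received from the target opposite terminal."""
    face_region = detect_face(frame)                    # is a face image present?
    if face_region is None:
        display_frame(frame)                            # no face: show the frame unchanged
        return frame
    target_face_data = lookup_face_data(face_db, peer_contact)  # preset face database lookup
    if target_face_data is None:
        display_frame(frame)
        return frame
    target_frame = restore_face(frame, face_region, target_face_data)  # face definition restoration
    display_frame(target_frame)                         # display the target video image
    return target_frame
```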
A second aspect of the embodiments of the present application provides a video call apparatus, including:
a first acquisition module, configured to acquire a video image sent by a target opposite terminal;
a second acquisition module, configured to acquire, according to the video image and when it is determined that the video image contains a face image, target face data matching the face image from a preset face database;
a restoration module, configured to perform face definition restoration on the video image based on the target face data to obtain a target video image; and
a display module, configured to display the target video image.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, wherein the steps of the method according to the first aspect are implemented when the computer program is executed by a processor.
A fifth aspect of the present application provides a computer program product including a computer program, wherein the steps of the method according to the first aspect are implemented when the computer program is executed by one or more processors.
Therefore, in the embodiments of the present application, a video image sent by a target opposite terminal is acquired; when it is determined that the video image contains a face image, target face data matching the face image is acquired from a preset face database; face definition restoration is performed on the video image based on the target face data; and the resulting target video image is displayed. Without increasing the traffic and power consumption of the opposite terminal and without changing its hardware, this improves the face image quality during the video call, improves image definition, and realizes a high-definition video call.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a first flow chart of a video call method provided by an embodiment of the present application;
Fig. 2 is a second flow chart of a video call method provided by an embodiment of the present application;
Fig. 3 is a structural diagram of a video call apparatus provided by an embodiment of the present application;
Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that, when used in this specification and the appended claims, the term "comprise" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present application includes, but is not limited to, portable devices such as a mobile phone, a laptop computer or a tablet computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should also be understood that, in some embodiments, the device may not be a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephony application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a digital video player application.
The various applications that can be executed on the terminal may use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in the following embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
To illustrate the technical solutions described in this application, specific embodiments are described below.
Referring to Fig. 1, Fig. 1 is a first flow chart of a video call method provided by an embodiment of the present application. As shown in Fig. 1, the video call method includes the following steps:
Step 101: acquire a video image sent by a target opposite terminal.
The execution subject of the video call method is an electronic device with a video call function, which may be a mobile phone, a tablet computer, a phone watch, or the like.
The target opposite terminal is also an electronic device with a video call function, and may likewise be a mobile phone, a tablet computer, a phone watch, or the like. Optionally, the execution subject is a mobile phone or a tablet computer and the target opposite terminal is a phone watch, but this is not limited thereto.
In this step, after the video call between the execution subject and the target opposite terminal is connected, the execution subject starts to acquire the video images sent by the target opposite terminal for subsequent processing.
Step 102: according to the video image, when it is determined that the video image contains a face image, acquire target face data matching the face image from a preset face database.
After the video image sent by the opposite terminal is acquired, it is first determined whether the video image contains a face image. If a face image is recognized, the matching target face data stored in the preset face database is further acquired.
Acquiring the target face data matching the face image from the preset face database includes: acquiring, from the preset face database, target face data matching the face display features in the face image; or acquiring, from the preset face database, face data associated with the contact information corresponding to the target opposite terminal, and determining that face data as the target face data matching the face image; or acquiring, from the preset face database, face data that both matches the face display features in the face image and is associated with the contact information corresponding to the target opposite terminal, as the target face data.
Specifically, as an optional embodiment, acquiring the target face data matching the face image from the preset face database includes:
determining whether contact information corresponding to the target opposite terminal exists in the address book;
when contact information corresponding to the target opposite terminal exists in the address book, acquiring, from the preset face database, initial face data associated with that contact information; and
determining the initial face data as the target face data matching the face image;
wherein the face data entries in the preset database correspond one-to-one to the different contact information entries in the address book.
In this process, the face data in the preset face database is associated with the contact information in the address book, with one face data entry corresponding to one contact information entry.
The preset face database may be stored in a local database, or in another device or a cloud device, which is not specifically limited here.
In a specific implementation, the face data needs to be modeled in advance. For example, face information suitable for high-definition restoration is collected in advance through an APP on the mobile phone, high-definition face modeling is performed, and the modeling data is stored in the database of the mobile phone client.
Specifically, the preset face database may be built by different technical means.
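Purely as an illustration of the contact-keyed association described above (the dictionary layout and function name below are assumptions for illustration, not structures defined by the patent):

```python
from typing import Optional

# Hypothetical preset face database: one modeled face entry per address-book contact.
preset_face_db = {
    "contact:child_phone_watch": {"model_file": "high_def_face_model.bin"},  # placeholder entry
}

def lookup_face_data(face_db: dict, peer_contact: str) -> Optional[dict]:
    """Return the initial face data associated with the caller's contact information, if any."""
    # One face-data entry corresponds to one contact information entry.
    return face_db.get(peer_contact)
```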
As an optional embodiment, before acquiring, according to the video image and when it is determined that the video image contains a face image, the target face data matching the face image from the preset face database, the method further includes:
collecting face images in different video calls to obtain preliminary face data;
screening, from the preliminary face data, screened face data that meets a definition index requirement;
determining the contact information of the opposite terminal corresponding to each of the different video calls; and
storing the screened face data in association with the contact information to obtain the preset face database.
In this process, when building the face database, face data is collected during the different video calls carried out with different opposite terminals. The face data meeting the definition index requirement is screened out and stored in association with the contact information of the corresponding opposite terminal, thereby generating the data in the face database and building the database. This avoids a dedicated process of recognizing and collecting the faces of different contacts, is more convenient to implement, and allows the face data to be continuously corrected and improved as video calls continue.
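The patent does not specify how the definition index is computed. As one common choice, the sketch below uses the variance of the Laplacian (via OpenCV) as a sharpness score, keeps only face crops above an assumed threshold, and stores them under the contact of the call; the threshold value and function names are assumptions.

```python
import cv2
import numpy as np

def sharpness_score(face_crop: np.ndarray) -> float:
    """Variance of the Laplacian as a simple definition (sharpness) index."""
    gray = cv2.cvtColor(face_crop, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def screen_and_store(preliminary_faces: list, contact: str, face_db: dict,
                     threshold: float = 100.0) -> None:
    """Keep only sufficiently sharp face crops and store them under the call's contact.
    The threshold value is an assumption, not taken from the patent."""
    screened = [face for face in preliminary_faces if sharpness_score(face) >= threshold]
    if screened:
        face_db.setdefault(contact, []).extend(screened)
```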
As another optional embodiment, before acquiring, according to the video image and when it is determined that the video image contains a face image, the target face data matching the face image from the preset face database, the method further includes:
outputting a face data collection interface in which different contact information entries are displayed;
when a collection trigger input for face data is received, collecting face data through an image collection device, wherein one collection trigger input corresponds to one contact information entry; and
storing the collected face data in association with the contact information to obtain the preset face database.
In this process, when building the face database, a face data collection interface is displayed for the user to select different contacts, and the face data of the corresponding contact is collected; the face data of the relevant contact is then stored in association with the contact information of the corresponding opposite terminal, thereby generating the data in the face database and building the database. A dedicated collection process makes it possible to acquire clearer and more complete face information.
Specifically, the collection process may collect a contact's face information on first use, or when the contact information is added, which is not specifically limited here.
Step 103: perform face definition restoration on the video image based on the target face data to obtain a target video image.
When performing face definition restoration on the video image, definition restoration is specifically performed on the face image in the video image; that is, the local face region of the video image is restored, so as to improve the display definition of the face portion within the overall image.
A specific implementation is illustrated with a mobile phone and a phone watch as an example. When the video call between the parent's mobile phone and the student's phone watch is connected, the application APP on the mobile phone extracts faces from the call video frames and compares them with the modeling data in the APP database. When matching face data exists, the high-definition face data is composited into the video image displayed on the mobile phone, so that the mobile phone side restores the portrait definition and displays a high-definition portrait; when no matching face data exists, the process ends. Without adding hardware or hardware cost at the watch side, this process enables the mobile phone side to restore a high-definition portrait in the video and improves the video call experience.
Step 104: display the target video image.
After the video image is restored, the target video image is directly displayed; that is, the video sent by the opposite terminal is restored directly during transmission, so that the restored target video image is directly displayed on the local display screen, improving the display effect.
When processing the video images sent by the opposite terminal, each received frame is processed as it arrives and the corresponding restored target video frame is displayed; frames are handled one after another in this way, achieving restoration and improvement of portrait definition throughout the entire video call, as sketched below.
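A minimal frame-by-frame loop, reusing the hypothetical `handle_incoming_frame` helper from the earlier sketch (the frame source and helper names are assumptions, not APIs from the patent):

```python
def run_call_loop(frame_source, peer_contact: str, face_db: dict) -> None:
    """Handle frames one after another for the duration of the video call."""
    for frame in frame_source:                    # frames sent by the target opposite terminal
        handle_incoming_frame(frame, peer_contact, face_db)   # restore and display this frame
```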
In the embodiments of the present application, a video image sent by a target opposite terminal is acquired; when it is determined that the video image contains a face image, target face data matching the face image is acquired from a preset face database; face definition restoration is performed on the video image based on the target face data; and the resulting target video image is displayed. Without increasing the traffic and power consumption of the opposite terminal and without changing its hardware, this improves the face image quality during the video call, improves image definition, and realizes a high-definition video call.
The embodiments of the present application also provide other implementations of the video call method.
Referring to Fig. 2, Fig. 2 is a second flow chart of a video call method provided by an embodiment of the present application. As shown in Fig. 2, the video call method includes the following steps:
Step 201: acquire a video image sent by a target opposite terminal.
The implementation of this step is the same as that of step 101 in the foregoing embodiment, and is not repeated here.
Step 202: according to the video image, when it is determined that the video image contains a face image, acquire target face data matching the face image from a preset face database.
In a specific implementation, an image may be extracted from the original video at set time intervals, and the face detection tool of the vision library OpenCV may be used to detect whether the image contains a face. The similarity between the detected face and the face data saved in the preset face database is then matched; if the face similarity is greater than 70%, the face in the original video image is replaced with the high-definition face in the preset face database.
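The passage above names OpenCV face detection and a 70% similarity threshold but no specific detector or similarity metric. The sketch below is one possible reading: a Haar cascade (bundled with opencv-python) for detection, with the similarity function left as a placeholder.

```python
import cv2

# Haar cascade bundled with opencv-python; the choice of detector is an assumption,
# the text only refers to "the face detection tool of OpenCV".
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the first detected face region as (x, y, w, h), or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) > 0 else None

def should_replace(face_crop, stored_face, similarity_fn) -> bool:
    """Replace only when similarity to the stored face exceeds 70%, as stated in the text.
    similarity_fn is a placeholder; the patent does not specify the similarity metric."""
    return similarity_fn(face_crop, stored_face) > 0.70
```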
As an optional embodiment, the target face data is three-dimensional face data, and performing face definition restoration on the video image based on the target face data to obtain the target video image includes the following steps.
Step 203: determine the face display features in the video image based on the face image.
The face display features include at least one of a face display angle, a face display size, a face display scale, a face display contour, a face display region, and a facial-feature distribution.
Based on the face display features parsed from the face image, and in combination with the three-dimensional face data, a face candidate position consistent with the current face image is found in the target face data in the preset face database.
Step 204: based on the face display features, acquire, from the three-dimensional face data, a face candidate position matching the face display features.
The face candidate position may be the face display region under a different face display angle, a different face display size, a different face display scale, a different face display contour, a different face display region, and/or a different facial-feature distribution.
Step 205: acquire a face replacement image corresponding to the face candidate position.
Based on the face candidate position determined from the three-dimensional face data, the corresponding face replacement image is acquired for replacing the corresponding face region, thereby improving the display definition of the image.
Step 206: replace the face image in the video image according to the face replacement image to obtain the target video image.
Here, the face display definition corresponding to the three-dimensional face data is greater than the display definition of the face image.
In this step, face definition is improved by directly substituting the higher-definition face region image at the corresponding face position; the original low-definition face region is replaced, thereby improving the face display definition in the video image.
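As a minimal sketch of the replacement in step 206, assuming the higher-definition face patch has already been rendered from the three-dimensional face data at the matching candidate position (the rendering itself is not shown, and the function name is an assumption):

```python
import cv2
import numpy as np

def replace_face_region(frame: np.ndarray, face_region: tuple,
                        replacement_face: np.ndarray) -> np.ndarray:
    """Paste a higher-definition face patch over the detected face region of the frame."""
    x, y, w, h = face_region
    patch = cv2.resize(replacement_face, (w, h))   # fit the patch to the detected region
    frame_out = frame.copy()
    frame_out[y:y + h, x:x + w] = patch            # direct replacement of the face region
    # For smoother edges, cv2.seamlessClone could be used instead of the direct copy.
    return frame_out
```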
Step 207: display the target video image.
The implementation of this step is the same as that of step 104 in the foregoing embodiment, and is not repeated here.
In the embodiments of the present application, a video image sent by a target opposite terminal is acquired; when it is determined that the video image contains a face image, target face data matching the face image is acquired from a preset face database; face definition restoration is performed on the video image based on the target face data; and the resulting target video image is displayed. Without increasing the traffic and power consumption of the opposite terminal and without changing its hardware, this improves the face image quality during the video call, improves image definition, and realizes a high-definition video call.
Referring to Fig. 3, Fig. 3 is a structural diagram of a video call apparatus provided by an embodiment of the present application; for ease of description, only the parts relevant to the embodiment of the present application are shown.
The video call apparatus includes a first acquisition module 301, a second acquisition module 302, a restoration module 303 and a display module 304.
The first acquisition module 301 is configured to acquire a video image sent by a target opposite terminal.
The second acquisition module 302 is configured to acquire, according to the video image and when it is determined that the video image contains a face image, target face data matching the face image from a preset face database.
The restoration module 303 is configured to perform face definition restoration on the video image based on the target face data to obtain a target video image.
The display module 304 is configured to display the target video image. A minimal structural sketch of these four modules follows.
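Purely for illustration, the four-module split of Fig. 3 could be mirrored as a thin wrapper around the hypothetical helpers used in the earlier sketches; the class and method names below are assumptions, not structures defined by the patent.

```python
class VideoCallApparatus:
    """Illustrative mirror of the four modules in Fig. 3 (not defined by the patent)."""

    def __init__(self, face_db: dict):
        self.face_db = face_db

    def acquire_frame(self, frame_source):                  # first acquisition module 301
        return next(iter(frame_source))

    def acquire_face_data(self, frame, peer_contact: str):  # second acquisition module 302
        if detect_face(frame) is None:
            return None
        return lookup_face_data(self.face_db, peer_contact)

    def restore(self, frame, target_face_data):             # restoration module 303
        region = detect_face(frame)
        if region is None or target_face_data is None:
            return frame
        return restore_face(frame, region, target_face_data)

    def display(self, frame):                                # display module 304
        display_frame(frame)
```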
The second acquisition module 302 is specifically configured to: determine whether contact information corresponding to the target opposite terminal exists in the address book; when such contact information exists in the address book, acquire, from the preset face database, initial face data associated with that contact information; and determine the initial face data as the target face data matching the face image; wherein the face data entries in the preset database correspond one-to-one to the different contact information entries in the address book.
The target face data is three-dimensional face data, and the restoration module 303 is specifically configured to: determine the face display features in the video image based on the face image; acquire, from the three-dimensional face data and based on the face display features, a face candidate position matching the face display features; acquire a face replacement image corresponding to the face candidate position; and replace the face image in the video image according to the face replacement image to obtain the target video image; wherein the face display definition corresponding to the three-dimensional face data is greater than the display definition of the face image.
The apparatus further includes a first database building module, configured to collect face images in different video calls to obtain preliminary face data; screen, from the preliminary face data, screened face data that meets the definition index requirement; determine the contact information of the opposite terminal corresponding to each of the different video calls; and store the screened face data in association with the contact information to obtain the preset face database.
The apparatus further includes a second database building module, configured to output a face data collection interface in which different contact information entries are displayed; collect face data through an image collection device when a collection trigger input for face data is received, wherein one collection trigger input corresponds to one contact information entry; and store the collected face data in association with the contact information to obtain the preset face database.
The target opposite terminal is a phone watch.
The video call apparatus provided by the embodiments of the present application can implement each process of the above embodiments of the video call method and achieve the same technical effects; to avoid repetition, details are not repeated here.
Fig. 4 is a structural diagram of a terminal provided by an embodiment of the present application. As shown in the figure, the terminal 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40.
Illustratively, the computer program 42 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 42 in the terminal 4. For example, the computer program 42 may be divided into a first acquisition module, a second acquisition module, a restoration module and a display module, whose specific functions are as follows:
the first acquisition module is configured to acquire a video image sent by a target opposite terminal;
the second acquisition module is configured to acquire, according to the video image and when it is determined that the video image contains a face image, target face data matching the face image from a preset face database;
the restoration module is configured to perform face definition restoration on the video image based on the target face data to obtain a target video image; and
the display module is configured to display the target video image.
The second acquisition module is specifically configured to: determine whether contact information corresponding to the target opposite terminal exists in the address book; when such contact information exists in the address book, acquire, from the preset face database, initial face data associated with that contact information; and determine the initial face data as the target face data matching the face image; wherein the face data entries in the preset database correspond one-to-one to the different contact information entries in the address book.
The target face data is three-dimensional face data, and the restoration module is specifically configured to: determine the face display features in the video image based on the face image; acquire, from the three-dimensional face data and based on the face display features, a face candidate position matching the face display features; acquire a face replacement image corresponding to the face candidate position; and replace the face image in the video image according to the face replacement image to obtain the target video image; wherein the face display definition corresponding to the three-dimensional face data is greater than the display definition of the face image.
The apparatus further includes a first database building module, configured to collect face images in different video calls to obtain preliminary face data; screen, from the preliminary face data, screened face data that meets the definition index requirement; determine the contact information of the opposite terminal corresponding to each of the different video calls; and store the screened face data in association with the contact information to obtain the preset face database.
The apparatus further includes a second database building module, configured to output a face data collection interface in which different contact information entries are displayed; collect face data through an image collection device when a collection trigger input for face data is received, wherein one collection trigger input corresponds to one contact information entry; and store the collected face data in association with the contact information to obtain the preset face database.
The target opposite terminal is a phone watch.
The terminal 4 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal 4 may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 4 is only an example of the terminal 4 and does not constitute a limitation on the terminal 4, which may include more or fewer components than shown, a combination of certain components, or different components; for example, the terminal may also include input/output devices, network access devices, buses, and so on.
The processor 40 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal 4. Further, the memory 41 may include both an internal storage unit of the terminal 4 and an external storage device. The memory 41 is used to store the computer program and other programs and data required by the terminal. The memory 41 may also be used to temporarily store data that has been output or will be output.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the division of the above functional units and modules is only used as an example. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another, and are not intended to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts that are not detailed or described in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the terminal embodiments described above are merely illustrative; the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.
Claims (10)
1. A video call method, comprising:
acquiring a video image sent by a target opposite terminal;
according to the video image, when it is determined that the video image contains a face image, acquiring target face data matching the face image from a preset face database;
performing face definition restoration on the video image based on the target face data to obtain a target video image; and
displaying the target video image.
2. The video call method according to claim 1, wherein acquiring the target face data matching the face image from the preset face database comprises:
determining whether contact information corresponding to the target opposite terminal exists in an address book;
when contact information corresponding to the target opposite terminal exists in the address book, acquiring, from the preset face database, initial face data associated with the contact information; and
determining the initial face data as the target face data matching the face image;
wherein the face data entries in the preset database correspond one-to-one to the different contact information entries in the address book.
3. The video call method according to claim 1, wherein the target face data is three-dimensional face data, and performing face definition restoration on the video image based on the target face data to obtain the target video image comprises:
determining face display features in the video image based on the face image;
acquiring, from the three-dimensional face data and based on the face display features, a face candidate position matching the face display features;
acquiring a face replacement image corresponding to the face candidate position; and
replacing the face image in the video image according to the face replacement image to obtain the target video image;
wherein the face display definition corresponding to the three-dimensional face data is greater than the display definition of the face image.
4. The video call method according to claim 1, wherein, before acquiring, according to the video image and when it is determined that the video image contains a face image, the target face data matching the face image from the preset face database, the method further comprises:
collecting face images in different video calls to obtain preliminary face data;
screening, from the preliminary face data, screened face data that meets a definition index requirement;
determining contact information of the opposite terminal corresponding to each of the different video calls; and
storing the screened face data in association with the contact information to obtain the preset face database.
5. The video call method according to claim 1, wherein, before acquiring, according to the video image and when it is determined that the video image contains a face image, the target face data matching the face image from the preset face database, the method further comprises:
outputting a face data collection interface in which different contact information entries are displayed;
when a collection trigger input for face data is received, collecting face data through an image collection device, wherein one collection trigger input corresponds to one contact information entry; and
storing the collected face data in association with the contact information to obtain the preset face database.
6. The video call method according to claim 1, wherein the target opposite terminal is a phone watch.
7. A video call apparatus, comprising:
a first acquisition module, configured to acquire a video image sent by a target opposite terminal;
a second acquisition module, configured to acquire, according to the video image and when it is determined that the video image contains a face image, target face data matching the face image from a preset face database;
a restoration module, configured to perform face definition restoration on the video image based on the target face data to obtain a target video image; and
a display module, configured to display the target video image.
8. The video call apparatus according to claim 7, wherein the second acquisition module is specifically configured to:
determine whether contact information corresponding to the target opposite terminal exists in an address book;
when contact information corresponding to the target opposite terminal exists in the address book, acquire, from the preset face database, initial face data associated with the contact information; and
determine the initial face data as the target face data matching the face image;
wherein the face data entries in the preset database correspond one-to-one to the different contact information entries in the address book.
9. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the steps of the method according to any one of claims 1 to 6 are implemented when the computer program is executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910561823.4A CN110266994B (en) | 2019-06-26 | 2019-06-26 | Video call method, video call device and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910561823.4A CN110266994B (en) | 2019-06-26 | 2019-06-26 | Video call method, video call device and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110266994A true CN110266994A (en) | 2019-09-20 |
CN110266994B CN110266994B (en) | 2021-03-26 |
Family
ID=67921838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910561823.4A Active CN110266994B (en) | 2019-06-26 | 2019-06-26 | Video call method, video call device and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110266994B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110602403A (en) * | 2019-09-23 | 2019-12-20 | 华为技术有限公司 | Method for taking pictures under dark light and electronic equipment |
CN111031241A (en) * | 2019-12-09 | 2020-04-17 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and computer readable storage medium |
CN111432154A (en) * | 2020-03-30 | 2020-07-17 | 维沃移动通信有限公司 | Video playing method, video processing method and electronic equipment |
CN111698553A (en) * | 2020-05-29 | 2020-09-22 | 维沃移动通信有限公司 | Video processing method and device, electronic equipment and readable storage medium |
WO2021109678A1 (en) * | 2019-12-04 | 2021-06-10 | 深圳追一科技有限公司 | Video generation method and apparatus, electronic device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100041061A (en) * | 2008-10-13 | 2010-04-22 | 성균관대학교산학협력단 | Video telephony method magnifying the speaker's face and terminal using thereof |
CN107566653A (en) * | 2017-09-22 | 2018-01-09 | 维沃移动通信有限公司 | A kind of call interface methods of exhibiting and mobile terminal |
CN107623832A (en) * | 2017-09-11 | 2018-01-23 | 广东欧珀移动通信有限公司 | Video background replacement method, device and mobile terminal |
CN108174141A (en) * | 2017-11-30 | 2018-06-15 | 维沃移动通信有限公司 | A kind of method of video communication and a kind of mobile device |
CN108683872A (en) * | 2018-08-30 | 2018-10-19 | Oppo广东移动通信有限公司 | Video call method, device, storage medium and mobile terminal |
US20190188453A1 (en) * | 2017-12-15 | 2019-06-20 | Hyperconnect, Inc. | Terminal and server for providing video call service |
- 2019-06-26: CN application CN201910561823.4A filed, later granted as patent CN110266994B (status: active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100041061A (en) * | 2008-10-13 | 2010-04-22 | 성균관대학교산학협력단 | Video telephony method magnifying the speaker's face and terminal using thereof |
CN107623832A (en) * | 2017-09-11 | 2018-01-23 | 广东欧珀移动通信有限公司 | Video background replacement method, device and mobile terminal |
CN107566653A (en) * | 2017-09-22 | 2018-01-09 | 维沃移动通信有限公司 | A kind of call interface methods of exhibiting and mobile terminal |
CN108174141A (en) * | 2017-11-30 | 2018-06-15 | 维沃移动通信有限公司 | A kind of method of video communication and a kind of mobile device |
US20190188453A1 (en) * | 2017-12-15 | 2019-06-20 | Hyperconnect, Inc. | Terminal and server for providing video call service |
CN108683872A (en) * | 2018-08-30 | 2018-10-19 | Oppo广东移动通信有限公司 | Video call method, device, storage medium and mobile terminal |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110602403A (en) * | 2019-09-23 | 2019-12-20 | 华为技术有限公司 | Method for taking pictures under dark light and electronic equipment |
WO2021057277A1 (en) * | 2019-09-23 | 2021-04-01 | 华为技术有限公司 | Photographing method in dark light and electronic device |
WO2021109678A1 (en) * | 2019-12-04 | 2021-06-10 | 深圳追一科技有限公司 | Video generation method and apparatus, electronic device, and storage medium |
CN111031241A (en) * | 2019-12-09 | 2020-04-17 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and computer readable storage medium |
CN111031241B (en) * | 2019-12-09 | 2021-08-27 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and computer readable storage medium |
CN111432154A (en) * | 2020-03-30 | 2020-07-17 | 维沃移动通信有限公司 | Video playing method, video processing method and electronic equipment |
CN111432154B (en) * | 2020-03-30 | 2022-01-25 | 维沃移动通信有限公司 | Video playing method, video processing method and electronic equipment |
CN111698553A (en) * | 2020-05-29 | 2020-09-22 | 维沃移动通信有限公司 | Video processing method and device, electronic equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110266994B (en) | 2021-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110266994A (en) | Video call method, video call device and terminal | |
CN108961279A (en) | Image processing method, device and mobile terminal | |
CN109144647B (en) | Form design method and device, terminal equipment and storage medium | |
CN108765340A (en) | Fuzzy image processing method, apparatus and terminal device | |
CN109086742A (en) | scene recognition method, scene recognition device and mobile terminal | |
CN109345553A (en) | A kind of palm and its critical point detection method, apparatus and terminal device | |
CN109782962A (en) | A kind of projection interactive method, device, system and terminal device | |
CN110457963B (en) | Display control method, display control device, mobile terminal and computer-readable storage medium | |
CN108898549A (en) | Image processing method, picture processing unit and terminal device | |
CN107193598A (en) | Application starting method, mobile terminal and computer readable storage medium | |
CN108874134A (en) | Eyeshield mode treatment method, mobile terminal and computer readable storage medium | |
CN110297973A (en) | A kind of data recommendation method based on deep learning, device and terminal device | |
CN109118447A (en) | A kind of image processing method, picture processing unit and terminal device | |
CN109376645A (en) | A kind of face image data preferred method, device and terminal device | |
CN110503409B (en) | Information processing method and related device | |
CN111858951A (en) | Learning recommendation method and device based on knowledge graph and terminal equipment | |
CN109144370A (en) | A kind of screenshotss method, apparatus, terminal and computer-readable medium | |
CN108769545A (en) | A kind of image processing method, image processing apparatus and mobile terminal | |
CN107506494B (en) | Document handling method, mobile terminal and computer readable storage medium | |
CN109359582A (en) | Information search method, information search device and mobile terminal | |
CN115984126A (en) | Optical image correction method and device based on input instruction | |
CN108520063A (en) | Processing method, device and the terminal device of event log | |
CN111597936A (en) | Face data set labeling method, system, terminal and medium based on deep learning | |
CN109544587A (en) | A kind of FIG pull handle method, apparatus and terminal device | |
CN113986428B (en) | Picture correction method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
| Effective date of registration: 2021-11-05. Address after: Room 1301, Building 1, No. 28, Chang'an Dongmen Middle Road, Chang'an Town, Dongguan City, Guangdong Province, 523000. Patentee after: Dongguan Bubugao Education Software Co.,Ltd. Address before: No. 168 Dongmen Middle Road, Xiaobian Community, Chang'an Town, Dongguan City, Guangdong Province, 523860. Patentee before: Guangdong GENIUS Technology Co., Ltd. |