WO2021179923A1 - User facial image display method, display device, and corresponding storage medium - Google Patents

User facial image display method, display device, and corresponding storage medium

Info

Publication number
WO2021179923A1
WO2021179923A1 (PCT/CN2021/078312, CN2021078312W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
frame
facial image
facial
feature points
Prior art date
Application number
PCT/CN2021/078312
Other languages
English (en)
French (fr)
Inventor
陈丹
马睿
熊垚森
Original Assignee
深圳看到科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳看到科技有限公司
Publication of WO2021179923A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T15/205 - Image-based rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present invention relates to the technical field of image processing, and in particular to a user facial image display method, a display device, and a corresponding storage medium.
  • In existing teleconferencing systems, both parties of a conversation can only see the user currently being captured by the camera. If the position of the talking user changes, that position can generally be refreshed through image recognition technology so as to track and shoot the talking user.
  • However, when the distance between the user and the camera changes significantly, the user's facial image may not be effectively recognized, and thus cannot be accurately rendered and displayed.
  • The embodiments of the present invention provide a user facial image display method and display device that recognize the user's facial image based on a facial feature frame. They can effectively recognize user facial images at different distances from the camera and realize accurate rendering and display of those images, which solves the technical problem that existing user facial image display methods and display devices cannot accurately render and display the user's facial image.
  • An embodiment of the present invention provides a method for displaying a facial image of a user, which includes: obtaining a panoramic picture frame of the current scene; performing a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame; determining a corresponding facial feature frame based on the facial feature points, and determining the user's shooting distance according to the facial feature frame; and rendering and displaying the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
  • The panoramic picture frame includes picture position information. The step of performing a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user includes: according to the picture position information, using the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end of the panoramic picture frame, obtaining an expanded panoramic picture frame; performing a face recognition operation on the expanded panoramic picture frame to obtain the facial feature points of each user in it; and, when facial feature points with the same picture position information exist, deduplicating those facial feature points.
  • The display angle of the panoramic picture frame corresponding to the end picture is 10 to 30 degrees.
  • The step of deduplicating facial feature points with the same picture position information includes: determining the related feature points of a first facial feature point within a set range, and determining the related feature points of a second facial feature point having the same picture position information within the set range; if the weight coefficient of the related feature points of the first facial feature point is greater than or equal to that of the second facial feature point, deleting the second facial feature point; otherwise, deleting the first facial feature point.
  • The weight coefficient of the related feature points of the first facial feature point is determined according to the category, quality, and quantity of those related feature points; the weight coefficient of the related feature points of the second facial feature point is determined in the same way.
  • Before the second facial feature point is deleted, the method further includes using the related feature points of the second facial feature point to correct the related feature points of the first facial feature point, so that the weight coefficient of the corrected related feature points of the first facial feature point is maximized; before the first facial feature point is deleted, the method further includes using the related feature points of the first facial feature point to correct the related feature points of the second facial feature point, so that the weight coefficient of the corrected related feature points of the second facial feature point is maximized.
  • In one embodiment, the step of determining the corresponding facial feature frame based on the facial feature points includes: obtaining at least two set feature points from the facial feature points; determining the frame size of the facial feature frame based on the length of the line between the set feature points; and determining the facial feature frame according to the positions of the set feature points and the frame size of the facial feature frame.
  • In another embodiment, the step includes: obtaining at least two set feature points from the facial feature points; determining the frame size of the facial feature frame based on the length of the line between the set feature points, and determining the frame deflection angle of the facial feature frame based on the angle between that line and a reference plane; and determining the facial feature frame according to the positions of the set feature points, the frame size, and the frame deflection angle.
  • The step of rendering and displaying the user's facial image according to the preset display size of the user's facial image and the user's shooting distance includes: obtaining the user's facial image collection area according to the shooting distance; collecting the user's facial image in the collection area; scaling the facial image based on the preset display size so that it can be displayed in a fixed user display position; and rendering and displaying the scaled facial image.
  • The user facial image display method further includes: displaying the expanded panoramic picture frame in a panoramic picture display position.
  • An embodiment of the present invention also provides a user facial image display device for displaying panoramic pictures, wherein the user facial image display device includes:
  • a facial feature point acquisition module, configured to acquire a panoramic picture frame of the current scene and to perform a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame;
  • a shooting distance determination module, configured to determine a corresponding facial feature frame based on the facial feature points and to determine the user's shooting distance according to the facial feature frame; and
  • a facial image rendering and display module, configured to render and display the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
  • The panoramic picture frame includes picture position information. The facial feature point acquisition module is configured to: according to the picture position information, use the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end, obtaining an expanded panoramic picture frame; perform a face recognition operation on the expanded panoramic picture frame to obtain the facial feature points of each user in it; and, when facial feature points with the same picture position information exist, deduplicate those facial feature points.
  • In one embodiment, the shooting distance determination module is configured to obtain at least two set feature points from the facial feature points, determine the frame size of the facial feature frame based on the length of the line between the set feature points, and determine the facial feature frame according to the positions of the set feature points and the frame size.
  • In another embodiment, the shooting distance determination module is configured to obtain at least two set feature points, determine the frame size of the facial feature frame based on the length of the line between the set feature points, determine the frame deflection angle of the facial feature frame based on the angle between that line and a reference plane, and determine the facial feature frame according to the positions of the set feature points, the frame size, and the frame deflection angle.
  • The facial image rendering and display module is configured to obtain the user's facial image collection area according to the shooting distance, collect the user's facial image in the collection area, scale the facial image based on the preset display size so that it can be displayed in a fixed user display position, and render and display the scaled facial image.
  • An embodiment of the present invention also provides a computer-readable storage medium in which processor-executable instructions are stored; the instructions are loaded by one or more processors to execute any of the aforementioned user facial image display methods.
  • Compared with existing methods and devices, the user facial image display method and display device of the present invention determine the user's facial image based on a facial feature frame, ensuring effective recognition of user facial images at different distances in the panoramic picture and thereby realizing accurate rendering and display of the user's facial image. This effectively solves the technical problem that existing user facial image display methods and display devices cannot accurately render and display the user's facial image.
  • FIG. 1 is a flowchart of an embodiment of the user facial image display method of the present invention;
  • FIG. 2 is a flowchart of the facial feature point acquisition process in an embodiment of the user facial image display method of the present invention;
  • FIG. 3 is a flowchart of the deduplication of facial feature points in an embodiment of the user facial image display method of the present invention;
  • FIG. 4 is a flowchart of the facial feature frame acquisition process in an embodiment of the user facial image display method of the present invention;
  • FIG. 5 is a flowchart of step S103 in an embodiment of the user facial image display method of the present invention;
  • FIG. 6 is a schematic structural diagram of an embodiment of the user facial image display device of the present invention;
  • FIG. 7 is a flowchart of displaying the user facial image and the panoramic picture with the user facial image display method and display device of the present invention;
  • FIGS. 8a-8c are schematic diagrams of displaying the user facial image and the panoramic picture with the user facial image display method and display device of the present invention;
  • FIG. 9 is a schematic diagram of the working environment of the electronic device where the user facial image display apparatus of the present invention is located.
  • The user facial image display method and display device of the present invention are used in electronic devices that effectively display panoramic pictures, and in particular the user facial images within panoramic pictures.
  • Such electronic devices include, but are not limited to, wearable devices, head-mounted devices, medical and health platforms, personal computers, server computers, handheld or laptop devices, mobile devices (such as mobile phones, personal digital assistants (PDAs), and media players), multiprocessor systems, consumer electronic devices, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and so on.
  • The electronic device is preferably an electronic terminal that receives a panoramic picture and displays it through a display screen; that is, the user can view the panoramic picture information captured by the panoramic camera in real time through a fixed or mobile terminal, such as the user pictures of a meeting scene.
  • FIG. 1 is a flowchart of an embodiment of a user facial image display method of the present invention.
  • the user facial image display method of this embodiment can be implemented using the above-mentioned electronic device.
  • the user facial image display method of this embodiment includes:
  • Step S101 Obtain a panoramic picture frame of the current scene; perform a face recognition operation on the panoramic picture frame to obtain facial feature points of each user in the panoramic picture frame;
  • Step S102 Determine the corresponding face feature frame based on the face feature points, and determine the user shooting distance of the user according to the face feature frame;
  • Step S103 Perform rendering and display operations on the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
  • the user's facial image display device obtains a panoramic image frame of the current scene captured by the panoramic camera.
  • the panoramic picture frame includes picture information within a 360-degree range centered on the panoramic camera.
  • the panoramic picture frame includes picture content information and picture position information used to represent position information corresponding to the picture content information (such as information corresponding to the user's facial image). That is, the panoramic picture frame includes picture content pixels and pixel position information corresponding to the picture content pixels.
  • FIG. 2 is a flowchart of the process of acquiring facial feature points in an embodiment of the method for displaying a user's facial image of the present invention.
  • the acquisition process includes:
  • In step S201, the user facial image display device uses the end picture at one end of the panoramic picture frame, according to the picture position information, to expand the picture frame edge at the other end of the panoramic picture frame, obtaining an expanded panoramic picture frame.
  • If a user is located at the edge of the panoramic picture frame, the user's facial image may be split between the end pictures at both ends of the frame. An end picture here refers to the picture image at one end of the panoramic picture frame, such as the picture image corresponding to the 0 to 30 degree range of the panoramic camera and the picture image corresponding to its 330 to 360 degree range.
  • Therefore, in this step, the user facial image display device uses the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end, obtaining the expanded panoramic picture frame.
  • The shooting angle of the panoramic camera corresponding to the expanded panoramic picture frame is thus about 370 to 390 degrees, of which roughly 10 degrees (for a 370-degree shooting angle) to 30 degrees (for a 390-degree shooting angle) of edge picture content overlaps. This effectively enables a user at the edge of the panoramic picture frame to be displayed completely in the end picture at one end of the expanded frame.
  • In step S202, the user facial image display device performs a face recognition operation on the expanded panoramic picture frame, that is, it recognizes facial feature points such as the eyes, mouth, and nose of the persons in the frame, so as to obtain the facial feature points of each user in the expanded panoramic picture frame and the picture position information corresponding to those feature points.
  • In step S203, since the end pictures of the expanded panoramic picture frame may contain repeated user facial images, and in order to avoid repeated or invalid recognition of user facial images, the user facial image display device deduplicates facial feature points that have the same picture position information, thereby improving the accuracy of the user facial images in the end pictures.
  • FIG. 3 is a flowchart of the deduplication of facial feature points in an embodiment of the user facial image display method of the present invention.
  • Here, the facial feature points with the same picture position information are respectively designated as the first facial feature point and the second facial feature point, where the first facial feature point is located in the end picture at one end of the panoramic picture frame and the second facial feature point is located in the end picture at the other end.
  • The deduplication of the facial feature points includes:
  • Step S301: the user facial image display device determines the related feature points of the first facial feature point within a set range. If the first facial feature point is a human eye feature point, the related feature points of that eye feature point are the image feature points (including the eye feature point itself) within the set range around the eye. Since feature points located in an end picture may have low accuracy (for example, blurred or even missing feature points), the accuracy of the first facial feature point is quantified through its related feature points: if the related feature points are severely missing, the accuracy of the corresponding eye feature point will also be low.
  • The user facial image display device then determines, within the set range, the related feature points of the second facial feature point that has the same picture position information as the first facial feature point.
  • Step S302: if the weight coefficient of the related feature points of the first facial feature point is greater than or equal to that of the second facial feature point, the user facial image display device deletes the second facial feature point and its related feature points; otherwise, it deletes the first facial feature point and its related feature points.
  • The weight coefficient of the related feature points of the first facial feature point is determined by the category, quality, and quantity of those related feature points. The more categories of related feature points there are (for example, more eye features, more eye-wrinkle features, and more image features of different categories around the eye region), the larger the weight coefficient. The higher the quality of the related feature points (that is, the stronger the correlation between adjacent feature points, with uniform color and grayscale variation and little deformation), the larger the weight coefficient. The more related feature points there are (that is, the larger the display area corresponding to the first facial feature point and the fewer missing related feature points), the larger the weight coefficient.
  • The user facial image display device determines the weight coefficient of the related feature points of the first facial feature point based on their category, quality, and quantity, and determines the weight coefficient of the related feature points of the second facial feature point in the same way. It thereby decides whether to delete the first facial feature point and its related feature points or the second facial feature point and its related feature points.
  • Further, before deleting the first facial feature point, the user facial image display device may use the related feature points of the first facial feature point to correct the related feature points of the second facial feature point, so that the weight coefficient of the corrected related feature points of the second facial feature point is maximized, that is, their category, quality, and quantity all reach their best.
  • Likewise, before deleting the second facial feature point, the user facial image display device may use the related feature points of the second facial feature point to correct the related feature points of the first facial feature point, so that the weight coefficient of the corrected related feature points of the first facial feature point is maximized.
  • In step S102, the user facial image display device determines the corresponding facial feature frame based on the facial feature points determined in step S101.
  • FIG. 4 is a flowchart of a process of acquiring a facial feature frame in an embodiment of a method for displaying a facial image of a user according to the present invention.
  • the acquisition process includes:
  • Step S401: obtain at least two set feature points from the facial feature points.
  • The facial feature frame set in this embodiment is a square frame, for which the frame size and the frame deflection angle need to be obtained. The frame size represents the size of the rectangular or square facial feature frame, indicating the size of the user's face; the frame deflection angle represents the deflection angle of the rectangular or square frame, indicating how far the user's face tilts to the left or right.
  • The user facial image display device therefore obtains at least two set feature points from the facial feature points of step S101, such as the user's two eyes, or an eye and the nose.
  • Multiple set feature points may be defined, such as the eyes, nose, mouth, ears, eyebrows, chin, and forehead. Since users may turn or raise their heads, or occlude one another, the user facial image display device can detect multiple set feature points at the same time and then select the two set feature points that accurately reflect the position of the user's facial image as the set feature points for subsequently determining the facial feature frame. The reliability of a set feature point can be judged by the distance between two set feature points or by the number of related feature points of the set feature point.
  • For example, if the device has detected set feature points such as the eyes, nose, and mouth, but the user has turned their head so that the two eyes are too close together or only one eye is detected, an eye and the nose, or an eye and the mouth, can be used as the set feature points. If the user wears a mask, so that set feature points such as the nose and chin have few related feature points (for instance, the chin cannot be detected or the nose has too few related feature points), the two eyes, or an eye and an ear, can be used as the set feature points. If the user pouts, bringing the mouth and nose too close together, the two ears or the two eyes can be used as the set feature points.
  • In step S402, the user facial image display device determines the frame size of the facial feature frame based on the length of the line between the set feature points acquired in step S401.
  • The closer the user is to the panoramic camera, the longer the line between the set feature points (such as the line between the two eyes), and accordingly the larger the frame size of the corresponding facial feature frame, that is, a larger facial feature frame is needed to represent the user's facial image.
  • Further, the frame deflection angle of the facial feature frame can be determined based on the angle between the line connecting the set feature points and a reference plane.
  • Normally, the line between the user's two eyes or two ears is roughly parallel to the horizontal plane, the lines between the nose and mouth and between the chin and nose are roughly perpendicular to the horizontal plane, and the angle between the horizontal plane and the line connecting an ear and the eye on the same side is roughly 5 to 15 degrees. The frame deflection angle of the facial feature frame can thus be determined from the angle between the line connecting the set feature points and the reference plane (the horizontal or vertical plane). For example, if the angle between the line connecting the user's eyes and the horizontal plane is 20 degrees, the frame deflection angle of the facial feature frame is also 20 degrees.
  • In step S403, the user facial image display device determines the facial feature frame according to the positions of the set feature points determined in step S401 and the frame size determined in step S402.
  • The user facial image display device then determines the user's shooting distance according to the facial feature frame. The shooting distance is determined mainly from the frame size of the facial feature frame: the larger the frame size, the smaller the shooting distance; the smaller the frame size, the larger the shooting distance.
  • Further, the user facial image display device can also set the angle of the facial feature frame according to the frame deflection angle obtained in step S402, so that the display angle of the user's facial image within the frame can subsequently be adjusted and the user's face displayed front-on.
  • In step S103, since the user facial image display device uses fixed user display positions to display user facial images, the user facial images need to be unified to a preset display size for display.
  • FIG. 5 is a flowchart of step S103 of an embodiment of a method for displaying a user's facial image of the present invention.
  • This step S103 includes:
  • In step S501, the user facial image display device obtains the user's facial image collection area, that is, the image area corresponding to the user's feature frame, according to the shooting distance obtained in step S102.
  • In step S502, the user facial image display device collects the user's facial image, that is, the image corresponding to the user's feature frame, in the collection area.
  • In step S503, the user facial image display device scales the facial image collected in step S502 based on the preset display size, so that the facial image can be displayed in the fixed user display position, and adjusts its angle based on the frame deflection angle. The fixed user display position here is a display position set at a fixed location on the terminal screen and used to track and display a particular user's facial image.
  • In step S504, the user facial image display device renders and displays the facial image scaled in step S503.
  • Meanwhile, the user facial image display device can also use the panoramic picture display position to display the 360-degree panoramic picture frame or the expanded panoramic picture frame; that is, the fixed user display positions display specific users while the panoramic picture display position displays the entire panoramic picture.
  • The user facial image display method of this embodiment determines the user's facial image based on a facial feature frame, ensuring effective recognition of user facial images at different distances in the panoramic picture and thereby realizing accurate rendering and display of the user's facial image.
  • FIG. 6 is a schematic structural diagram of an embodiment of the user facial image display device of the present invention.
  • the user facial image display device of this embodiment can be implemented using the above-mentioned user facial image display method.
  • The user facial image display device 60 of this embodiment includes a facial feature point acquisition module 61, a shooting distance determination module 62, and a facial image rendering and display module 63.
  • The facial feature point acquisition module 61 is used to acquire the panoramic picture frame of the current scene and perform a face recognition operation on it to obtain the facial feature points of each user in the frame; the shooting distance determination module 62 is used to determine the corresponding facial feature frame based on the facial feature points and to determine the user's shooting distance according to the facial feature frame; the facial image rendering and display module 63 is used to render and display the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
  • the facial feature point acquisition module 61 acquires the panoramic image frame of the current scene captured by the panoramic camera.
  • the panoramic picture frame includes picture information within a 360-degree range centered on the panoramic camera.
  • the panoramic picture frame includes picture content information and picture position information for indicating position information corresponding to the picture content information.
  • the facial feature point acquisition module 61 performs a face recognition operation on the acquired panoramic image frame to obtain the facial feature points of each user in the panoramic image frame.
  • the facial feature points here are facial features such as eyes, mouth, and nose.
  • Then the shooting distance determination module 62 determines the corresponding facial feature frame based on the facial feature points, and determines the user's shooting distance according to the facial feature frame, mainly based on its frame size: the larger the frame size of the facial feature frame, the smaller the user's shooting distance; the smaller the frame size, the greater the shooting distance.
  • Since the user facial image display device 60 uses fixed user display positions to display user facial images, it is necessary to unify the user facial images to a preset display size for display.
  • the facial image rendering and display module 63 adjusts, renders, and displays the user's facial image in the facial feature frame according to the preset display size of the user's facial image and the user's shooting distance.
  • the facial image rendering display module 63 can also use the panoramic image display position to display the 360-degree panoramic image frame or the expanded panoramic image frame, that is, use the fixed user display position to display specific users, and use the panoramic image display position to display The entire panoramic picture is displayed.
  • FIG. 7 is a flowchart of displaying user facial images and the panoramic picture with the user facial image display method and device of the present invention, and FIGS. 8a-8c are the corresponding schematic diagrams.
  • The user facial image display device of this embodiment is installed on an electronic terminal with a display screen, so as to display in real time the panoramic picture information captured by the panoramic camera and the user facial information in the panoramic picture.
  • The panoramic picture therefore includes fixed user display positions and a panoramic picture display position. As shown in FIG. 8c, the panoramic picture displayed by the electronic terminal includes fixed user display positions 801, 802, 803, and 804 and a panoramic picture display position 805.
  • the display process of the panoramic image includes:
  • In step S701, the electronic terminal obtains a panoramic picture frame of the current scene, as shown in FIG. 8a.
  • In step S702, since a user's face may be located at the edge of the panoramic picture frame, the electronic terminal uses the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end. The resulting expanded panoramic picture frame is shown in FIG. 8b.
  • In step S703, the electronic terminal performs a face recognition operation on the expanded panoramic picture frame to obtain the facial feature points of each user in it, such as those of user a, user b, user c, and user d in FIG. 8b. Because user d is at the edge of the panoramic picture frame, user d has a first facial feature point in the end picture at one end of the frame (the right end of FIG. 8b) and a second facial feature point in the end picture at the other end (the left end of FIG. 8b).
  • In step S704, since the weight coefficient of user d's first facial feature point is greater than that of the second facial feature point, the electronic terminal deletes user d's second facial feature point.
  • In step S705, the electronic terminal determines the corresponding facial feature frames based on the facial feature points of user a, user b, user c, and user d, specifically the facial feature frames 806, 807, 808, and 809 in FIG. 8b.
  • In step S706, the electronic terminal determines the shooting distance of each user according to the frame size of each facial feature frame, and corrects each facial feature frame according to its frame deflection angle.
  • In step S707, the electronic terminal scales each user's facial image according to the preset display size and the user's shooting distance, so that the scaled facial image matches the preset display size.
  • In step S708, the electronic terminal displays the scaled facial images in the fixed user display positions 801, 802, 803, and 804, each at the preset display size; meanwhile, the electronic terminal displays the 360-degree panoramic picture frame or the expanded panoramic picture frame in the panoramic picture display position 805, so that the specific positions of user a, user b, user c, and user d in the frame can be viewed, as shown in FIG. 8c.
  • The user facial image display method and display device of the present invention determine the user's facial image based on a facial feature frame, ensuring effective recognition of user facial images at different distances in the panoramic picture and thereby realizing accurate rendering and display of the user's facial image; this effectively solves the technical problem that existing methods and devices cannot accurately render and display the user's facial image.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable application, a thread of execution, a program, and/or a computer.
  • the application running on the controller and the controller can be components.
  • One or more components may exist in an executing process and/or thread, and the components may be located on one computer and/or distributed between two or more computers.
  • Example electronic devices 912 include, but are not limited to, wearable devices, head-mounted devices, medical and health platforms, personal computers, server computers, handheld or laptop devices, mobile devices (such as mobile phones, personal digital assistants (PDAs), media players) Etc.), multi-processor systems, consumer electronic devices, small computers, large computers, distributed computing environments including any of the above systems or devices, etc.
  • Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, application programming interfaces (APIs), and data structures, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • FIG. 9 illustrates an example of an electronic device 912 including one or more embodiments of the user facial image display apparatus of the present invention.
  • the electronic device 912 includes at least one processing unit 916 and a memory 918.
  • the memory 918 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This configuration is illustrated by the dashed line 914 in FIG. 9.
  • the electronic device 912 may include additional features and/or functions.
  • the device 912 may also include additional storage devices (for example, removable and/or non-removable), including, but not limited to, magnetic storage devices, optical storage devices, and the like.
  • Such an additional storage device is illustrated by the storage device 920 in FIG. 9.
  • computer-readable instructions for implementing one or more embodiments provided herein may be in the storage device 920.
  • the storage device 920 may also store other computer-readable instructions for implementing an operating system, application programs, and the like.
  • the computer-readable instructions can be loaded into the memory 918 and executed by the processing unit 916, for example.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer readable instructions or other data.
  • the memory 918 and the storage device 920 are examples of computer storage media.
  • Computer storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disk (DVD) or other optical storage devices, cassette tapes, magnetic tapes, disk storage devices or other magnetic storage devices, Or any other medium that can be used to store desired information and can be accessed by the electronic device 912. Any such computer storage medium may be part of the electronic device 912.
  • the electronic device 912 may also include a communication connection 926 that allows the electronic device 912 to communicate with other devices.
  • the communication connection 926 may include, but is not limited to, a modem, a network interface card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting the electronic device 912 to other electronic devices.
  • the communication connection 926 may include a wired connection or a wireless connection.
  • the communication connection 926 can transmit and/or receive communication media.
  • Computer-readable medium may include communication media.
  • Communication media typically contain computer-readable instructions or other data in a “modulated data signal” such as a carrier wave or other transmission mechanism, and include any information delivery media.
  • modulated data signal may include a signal in which one or more of the characteristics of the signal is set or changed in a manner that encodes information into the signal.
  • the electronic device 912 may include an input device 924, such as a keyboard, a mouse, a pen, a voice input device, a touch input device, an infrared camera, a video input device, and/or any other input device.
  • the device 912 may also include an output device 922, such as one or more displays, speakers, printers, and/or any other output devices.
  • the input device 924 and the output device 922 may be connected to the electronic device 912 via a wired connection, a wireless connection, or any combination thereof. In one embodiment, an input device or output device from another electronic device may be used as the input device 924 or output device 922 of the electronic device 912.
  • the components of the electronic device 912 may be connected through various interconnections (such as buses). Such interconnections may include Peripheral Component Interconnect (PCI) (such as PCI Express), Universal Serial Bus (USB), FireWire (IEEE 1394), optical bus structures, and so on.
  • the components of the electronic device 912 may be interconnected through a network.
  • the memory 918 may be composed of multiple physical memory units located in different physical locations and interconnected by a network.
  • storage devices used to store computer-readable instructions can be distributed across a network.
  • the electronic device 930 accessible via the network 928 may store computer-readable instructions for implementing one or more embodiments provided by the present invention.
  • the electronic device 912 can access the electronic device 930 and download part or all of the computer-readable instructions for execution.
  • the electronic device 912 may download multiple computer-readable instructions as needed, or some instructions may be executed at the electronic device 912 and some instructions may be executed at the electronic device 930.
  • the one or more operations described may constitute computer-readable instructions stored on one or more computer-readable media, which, when executed by an electronic device, will cause the computing device to perform the operations. The order in which some or all of the operations are described should not be construed as implying that these operations are necessarily order-dependent. Those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in every embodiment provided herein.
  • the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer readable storage medium.
  • the aforementioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
  • Each of the above-mentioned devices or systems can execute the methods in the corresponding method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention provides a user facial image display method, which includes: obtaining a panoramic picture frame of the current scene; performing a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame; determining a corresponding facial feature frame based on the facial feature points, and determining the user's shooting distance according to the facial feature frame; and rendering and displaying the user's facial image according to the preset display size of the user's facial image and the user's shooting distance. The present invention also provides a corresponding display device. By determining the user's facial image based on the facial feature frame, the present invention ensures effective recognition of user facial images at different distances in the panoramic picture, thereby realizing accurate rendering and display of the user's facial image.

Description

User facial image display method, display device, and corresponding storage medium
Technical Field
The present invention relates to the technical field of image processing, and in particular to a user facial image display method, a display device, and a corresponding storage medium.
Background
With the development of society, interactions between people have become ever closer, while the production sites of different parts of the same product have become ever more dispersed. Company product leads therefore often need to contact employees of different companies in various places for a given production plan, for example through conference calls.
In existing teleconferencing systems, both parties of a conversation can only see the user currently being captured by the camera. If the position of the talking user changes, that position can generally be refreshed through image recognition technology, so as to track and shoot the talking user.
Technical Problem
However, when the distance between the user and the camera changes significantly, the user's facial image may not be effectively recognized, and consequently the user's facial picture cannot be accurately rendered and displayed.
It is therefore necessary to provide a user facial image display method and display device to solve the problems of the prior art.
Technical Solution
Embodiments of the present invention provide a user facial image display method and display device that effectively recognize the user's facial image based on a facial feature frame. They can effectively recognize user facial images at different distances from the camera, realize accurate rendering and display of the user's facial picture, and effectively solve the technical problem that existing user facial image display methods and display devices cannot accurately render and display the user's facial picture.
An embodiment of the present invention provides a user facial image display method, which includes:
obtaining a panoramic picture frame of the current scene, and performing a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame;
determining a corresponding facial feature frame based on the facial feature points, and determining the user's shooting distance according to the facial feature frame; and
rendering and displaying the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
In the user facial image display method of the present invention, the panoramic picture frame includes picture position information;
the step of performing a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame includes:
according to the picture position information, using the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end of the panoramic picture frame, obtaining an expanded panoramic picture frame;
performing a face recognition operation on the expanded panoramic picture frame to obtain the facial feature points of each user in the expanded panoramic picture frame; and
when facial feature points with the same picture position information exist, deduplicating the facial feature points with the same picture position information.
In the user facial image display method of the present invention, the display angle of the panoramic picture frame corresponding to the end picture is 10 to 30 degrees.
In the user facial image display method of the present invention, the step of deduplicating the facial feature points with the same picture position information includes:
determining the related feature points of a first facial feature point within a set range, and determining the related feature points, within the set range, of a second facial feature point having the same picture position information as the first facial feature point; and
if the weight coefficient of the related feature points of the first facial feature point is greater than or equal to the weight coefficient of the related feature points of the second facial feature point, deleting the second facial feature point; otherwise, deleting the first facial feature point.
In the user facial image display method of the present invention, the weight coefficient of the related feature points of the first facial feature point is determined according to the category, quality, and quantity of the related feature points of the first facial feature point;
the weight coefficient of the related feature points of the second facial feature point is determined according to the category, quality, and quantity of the related feature points of the second facial feature point.
In the user facial image display method of the present invention, before the second facial feature point is deleted, the method further includes: using the related feature points of the second facial feature point to correct the related feature points of the first facial feature point, so that the weight coefficient of the corrected related feature points of the first facial feature point is maximized;
before the first facial feature point is deleted, the method further includes: using the related feature points of the first facial feature point to correct the related feature points of the second facial feature point, so that the weight coefficient of the corrected related feature points of the second facial feature point is maximized.
In the user facial image display method of the present invention, the step of determining the corresponding facial feature frame based on the facial feature points includes:
obtaining at least two set feature points from the facial feature points;
determining the frame size of the facial feature frame based on the length of the line between the set feature points; and
determining the facial feature frame according to the positions of the set feature points and the frame size of the facial feature frame.
In the user facial image display method of the present invention, the step of determining the corresponding facial feature frame based on the facial feature points may also include:
obtaining at least two set feature points from the facial feature points;
determining the frame size of the facial feature frame based on the length of the line between the set feature points, and determining the frame deflection angle of the facial feature frame based on the angle between the line connecting the set feature points and a reference plane; and
determining the facial feature frame according to the positions of the set feature points, the frame size of the facial feature frame, and the frame deflection angle of the facial feature frame.
In the user facial image display method of the present invention, the step of rendering and displaying the user's facial image according to the preset display size of the user's facial image and the user's shooting distance includes:
obtaining the user's facial image collection area according to the user's shooting distance;
collecting the user's facial image in the facial image collection area;
scaling the user's facial image based on the preset display size of the user's facial image, so that the facial image can be displayed in a fixed user display position; and
rendering and displaying the scaled user facial image.
In the user facial image display method of the present invention, the method further includes:
displaying the expanded panoramic picture frame in a panoramic picture display position.
An embodiment of the present invention also provides a user facial image display device for displaying panoramic pictures, wherein the user facial image display device includes:
a facial feature point acquisition module, used to acquire a panoramic picture frame of the current scene and to perform a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame;
a shooting distance determination module, used to determine a corresponding facial feature frame based on the facial feature points and to determine the user's shooting distance according to the facial feature frame; and
a facial image rendering and display module, used to render and display the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
In the user facial image display device of the present invention, the panoramic picture frame includes picture position information;
the facial feature point acquisition module is used to: according to the picture position information, use the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end of the panoramic picture frame, obtaining an expanded panoramic picture frame;
perform a face recognition operation on the expanded panoramic picture frame to obtain the facial feature points of each user in the expanded panoramic picture frame; and
when facial feature points with the same picture position information exist, deduplicate the facial feature points with the same picture position information.
In the user facial image display device of the present invention, the shooting distance determination module is used to obtain at least two set feature points from the facial feature points;
determine the frame size of the facial feature frame based on the length of the line between the set feature points; and
determine the facial feature frame according to the positions of the set feature points and the frame size of the facial feature frame.
In the user facial image display device of the present invention, the shooting distance determination module may also be used to obtain at least two set feature points from the facial feature points;
determine the frame size of the facial feature frame based on the length of the line between the set feature points, and determine the frame deflection angle of the facial feature frame based on the angle between the line connecting the set feature points and a reference plane; and
determine the facial feature frame according to the positions of the set feature points, the frame size of the facial feature frame, and the frame deflection angle of the facial feature frame.
In the user facial image display device of the present invention, the facial image rendering and display module is used to obtain the user's facial image collection area according to the user's shooting distance;
collect the user's facial image in the facial image collection area;
scale the user's facial image based on the preset display size of the user's facial image, so that the facial image can be displayed in a fixed user display position; and
render and display the scaled user facial image.
An embodiment of the present invention also provides a computer-readable storage medium in which processor-executable instructions are stored; the instructions are loaded by one or more processors to execute any of the user facial image display methods described above.
Beneficial Effects
Compared with existing user facial image display methods and display devices, the user facial image display method and display device of the present invention determine the user's facial image based on a facial feature frame, ensuring effective recognition of user facial images at different distances in the panoramic picture and thereby realizing accurate rendering and display of the user's facial image; this effectively solves the technical problem that existing user facial image display methods and display devices cannot accurately render and display the user's facial picture.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below; the drawings described below correspond only to some embodiments of the present invention.
FIG. 1 is a flowchart of an embodiment of the user facial image display method of the present invention;
FIG. 2 is a flowchart of the facial feature point acquisition process in an embodiment of the user facial image display method of the present invention;
FIG. 3 is a flowchart of the deduplication of facial feature points in an embodiment of the user facial image display method of the present invention;
FIG. 4 is a flowchart of the facial feature frame acquisition process in an embodiment of the user facial image display method of the present invention;
FIG. 5 is a flowchart of step S103 in an embodiment of the user facial image display method of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the user facial image display device of the present invention;
FIG. 7 is a flowchart of displaying the user facial image and the panoramic picture with the user facial image display method and the user facial image display device of the present invention;
FIGS. 8a-8c are schematic diagrams of displaying the user facial image and the panoramic picture with the user facial image display method and the user facial image display device of the present invention;
FIG. 9 is a schematic diagram of the working environment of the electronic device where the user facial image display apparatus of the present invention is located.
Best Mode for Carrying Out the Invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
The user facial image display method and display device of the present invention are used in electronic devices that effectively display panoramic pictures, and in particular the user facial images within panoramic pictures. Such electronic devices include, but are not limited to, wearable devices, head-mounted devices, medical and health platforms, personal computers, server computers, handheld or laptop devices, mobile devices (such as mobile phones, personal digital assistants (PDAs), and media players), multiprocessor systems, consumer electronic devices, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and so on. The electronic device is preferably an electronic terminal that receives a panoramic picture and displays it through a display screen; that is, the user can view the panoramic picture information captured by the panoramic camera in real time through a fixed or mobile terminal, such as the user pictures of a meeting scene.
Please refer to FIG. 1, which is a flowchart of an embodiment of the user facial image display method of the present invention. The user facial image display method of this embodiment can be implemented using the electronic device described above and includes:
Step S101: obtaining a panoramic picture frame of the current scene, and performing a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame;
Step S102: determining a corresponding facial feature frame based on the facial feature points, and determining the user's shooting distance according to the facial feature frame;
Step S103: rendering and displaying the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
The specific flow of each step of the user facial image display method of the present invention is described in detail below.
In step S101, the user facial image display device (such as an electronic terminal) obtains a panoramic picture frame of the current scene captured by a panoramic camera. The panoramic picture frame includes picture information within a 360-degree range centered on the panoramic camera. It includes picture content information as well as picture position information used to represent the position information corresponding to the picture content information (such as the information corresponding to a user's facial image); that is, the panoramic picture frame includes picture content pixels and the pixel position information corresponding to those pixels.
The user facial image display device then performs a face recognition operation on the acquired panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame. The facial feature points here are facial features such as a person's eyes, mouth, and nose. For the acquisition process of the facial feature points, please refer to FIG. 2, which is a flowchart of the facial feature point acquisition process in an embodiment of the user facial image display method of the present invention. The acquisition process includes:
Step S201: the user facial image display device uses the end picture at one end of the panoramic picture frame, according to the picture position information, to expand the picture frame edge at the other end of the panoramic picture frame, obtaining an expanded panoramic picture frame.
If a user is located at the edge of the panoramic picture frame, the user's facial image may be split between the end pictures at both ends of the frame. An end picture here refers to the picture image at one end of the panoramic picture frame, such as the picture image corresponding to the 0 to 30 degree range of the panoramic camera and the picture image corresponding to its 330 to 360 degree range.
Therefore, in this step, the user facial image display device uses the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end, obtaining the expanded panoramic picture frame. The shooting angle of the panoramic camera corresponding to the expanded panoramic picture frame is thus about 370 to 390 degrees, of which roughly 10 degrees (for a 370-degree shooting angle) to 30 degrees (for a 390-degree shooting angle) of edge picture content overlaps; this effectively enables a user at the edge of the panoramic picture frame to be displayed completely in the end picture at one end of the expanded frame.
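The wrap-around expansion of step S201 can be pictured with a short sketch. The following Python snippet is illustrative only and not part of the claimed subject matter; it assumes the panoramic picture frame is an equirectangular numpy image whose width spans 360 degrees, and the function name, array layout, and the 20-degree default overlap are assumptions of this sketch.

```python
import numpy as np

def expand_panorama(frame: np.ndarray, overlap_deg: float = 20.0) -> np.ndarray:
    """Append a copy of the left-edge strip of an equirectangular 360-degree
    frame to its right edge, so that a face straddling the 0/360-degree seam
    appears whole in the expanded frame (step S201).

    overlap_deg is the width of the duplicated end picture in degrees; the
    description puts it at roughly 10 to 30 degrees.
    """
    w = frame.shape[1]
    strip_w = int(round(w * overlap_deg / 360.0))   # overlap width in pixels
    strip = frame[:, :strip_w]                      # end picture at one end
    return np.concatenate([frame, strip], axis=1)   # 370-390 degree frame
```

In such a representation, a pixel column x of the expanded frame maps back to the angle (x / w * 360) mod 360, which plays the role of the picture position information used to detect duplicate face detections in the overlapping strip.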
Step S202: the user facial image display device performs a face recognition operation on the expanded panoramic picture frame, that is, it recognizes facial feature points such as the eyes, mouth, and nose of the persons in the frame, thereby obtaining the facial feature points of each user in the expanded panoramic picture frame and the picture position information corresponding to those feature points.
Step S203: since the end pictures of the expanded panoramic picture frame may contain repeated user facial images, and in order to avoid repeated or invalid recognition of the user facial images in the panoramic picture frame, when facial feature points with the same picture position information exist, the user facial image display device deduplicates them, thereby improving the accuracy of the user facial images in the end pictures.
For the specific flow of this deduplication, please refer to FIG. 3, which is a flowchart of the deduplication of facial feature points in an embodiment of the user facial image display method of the present invention. Here, the facial feature points with the same picture position information are respectively designated as the first facial feature point and the second facial feature point, where the first facial feature point is located in the end picture at one end of the panoramic picture frame and the second facial feature point is located in the end picture at the other end. The deduplication of the facial feature points includes:
Step S301: the user facial image display device determines the related feature points of the first facial feature point within a set range. If the first facial feature point is a human eye feature point, the related feature points of that eye feature point are the image feature points (including the eye feature point itself) within the set range around the eye. Since feature points located in an end picture may have low accuracy (for example, blurred or even missing feature points), the accuracy of the first facial feature point is quantified here through its related feature points: if the related feature points are severely missing, the accuracy of the corresponding eye feature point will also be low.
The user facial image display device then determines, within the set range, the related feature points of the second facial feature point that has the same picture position information as the first facial feature point.
Step S302: if the weight coefficient of the related feature points of the first facial feature point is greater than or equal to that of the second facial feature point, the user facial image display device deletes the second facial feature point and its related feature points; otherwise, it deletes the first facial feature point and its related feature points.
The weight coefficient of the related feature points of the first facial feature point is determined by the category, quality, and quantity of those related feature points. The more categories of related feature points there are (for example, more eye features, more eye-wrinkle features, and more image features of different categories around the eye region), the larger the weight coefficient. The higher the quality of the related feature points (that is, the stronger the correlation between adjacent feature points, with uniform color and grayscale variation and little deformation of the feature points), the larger the weight coefficient. The more related feature points there are (that is, the larger the display area corresponding to the first facial feature point and the fewer missing related feature points), the larger the weight coefficient.
The user facial image display device determines the weight coefficient of the related feature points of the first facial feature point based on their category, quality, and quantity, and determines the weight coefficient of the related feature points of the second facial feature point in the same way; it thereby decides whether to delete the first facial feature point and its related feature points or the second facial feature point and its related feature points.
Further, before deleting the first facial feature point, the user facial image display device may use the related feature points of the first facial feature point to correct the related feature points of the second facial feature point, so that the weight coefficient of the corrected related feature points of the second facial feature point is maximized, that is, so that their category, quality, and quantity all reach their best.
Likewise, before deleting the second facial feature point, the user facial image display device may use the related feature points of the second facial feature point to correct the related feature points of the first facial feature point, so that the weight coefficient of the corrected related feature points of the first facial feature point is maximized.
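The deduplication rule of steps S301-S302 can be summarized in code. The sketch below is illustrative and not the patent's implementation: the description states only that the weight coefficient grows with the category, quality, and quantity of the related feature points, so the concrete weight() formula, the FeaturePoint fields, and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FeaturePoint:
    position: tuple      # picture position information, e.g. (angle_deg, y)
    categories: int      # number of distinct categories of related feature points
    quality: float       # 0..1 quality score of the related feature points
    count: int           # number of related feature points detected

def weight(p: FeaturePoint) -> float:
    # Hypothetical weighting: more categories, higher quality, and more
    # related feature points each increase the weight coefficient.
    return p.categories * p.quality * p.count

def deduplicate(first: FeaturePoint, second: FeaturePoint) -> FeaturePoint:
    """Keep the detection with the larger weight coefficient; the first
    facial feature point wins ties, mirroring step S302."""
    return first if weight(first) >= weight(second) else second
```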
In step S102, the user facial image display device determines the corresponding facial feature frame based on the facial feature points determined in step S101. Please refer to FIG. 4, which is a flowchart of the facial feature frame acquisition process in an embodiment of the user facial image display method of the present invention. The acquisition process includes:
Step S401: obtaining at least two set feature points from the facial feature points.
The facial feature frame set in this embodiment is a square frame, for which the frame size and the frame deflection angle need to be obtained. The frame size represents the size of the rectangular or square facial feature frame, indicating the size of the user's face; the frame deflection angle represents the deflection angle of the rectangular or square frame, indicating how far the user's face tilts to the left or right.
The user facial image display device therefore obtains at least two of the set feature points among the facial feature points of step S101, such as the user's two eyes, or an eye and the nose.
Multiple set feature points can be defined here, such as the eyes, nose, mouth, ears, eyebrows, chin, and forehead.
Since users may turn or raise their heads, or occlude one another, the user facial image display device can detect multiple set feature points at the same time and then select the two set feature points that accurately reflect the position of the user's facial image as the set feature points for subsequently determining the facial feature frame. The reliability of the set feature points can be determined, for example, by the distance between two set feature points or by the number of related feature points of a set feature point.
For example, if the device has detected set feature points such as the eyes, nose, and mouth, but the user has turned their head so that the two eyes are too close together or only one eye is detected, an eye and the nose, or an eye and the mouth, can be used as the set feature points. If the user wears a mask, so that set feature points such as the nose and chin have few related feature points (for instance, the chin cannot be detected or the nose has too few related feature points), the two eyes, or an eye and an ear, can be used as the set feature points. If the user pouts, bringing the mouth and nose too close together, the two ears or the two eyes can be used as the set feature points.
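As a concrete illustration of this selection logic, the sketch below picks two set feature points from whatever was detected; the preference order, the thresholds, and the point names are assumptions of this sketch, since the description gives examples rather than a fixed rule.

```python
import math

def pick_set_feature_points(points: dict, min_related: int = 5, min_sep_px: float = 10.0):
    """Pick two reliable set feature points for building the facial feature
    frame. points maps a part name to (x, y, related_count); occluded or
    undetected parts are simply absent from the dict."""
    preferences = [("eye_l", "eye_r"), ("eye_l", "nose"), ("eye_r", "nose"),
                   ("eye_l", "ear_l"), ("eye_r", "ear_r"), ("ear_l", "ear_r")]
    for a, b in preferences:
        if a in points and b in points:
            xa, ya, na = points[a]
            xb, yb, nb = points[b]
            # Require enough related feature points and enough separation,
            # the two reliability checks named in the description.
            if min(na, nb) >= min_related and math.hypot(xb - xa, yb - ya) >= min_sep_px:
                return a, b
    return None  # no reliable pair in this frame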
Step S402: the user facial image display device determines the frame size of the facial feature frame based on the length of the line between the set feature points acquired in step S401.
The closer the user is to the panoramic camera, the longer the line between the set feature points (such as the line between the two eyes), and accordingly the larger the frame size of the corresponding facial feature frame, that is, a larger facial feature frame is needed to represent the user's facial image.
Further, the frame deflection angle of the facial feature frame can also be determined here based on the angle between the line connecting the set feature points and a reference plane.
Normally, the line between the user's two eyes or two ears is roughly parallel to the horizontal plane, the lines between the nose and mouth and between the chin and nose are roughly perpendicular to the horizontal plane, and the angle between the horizontal plane and the line connecting an ear and the eye on the same side is roughly 5 to 15 degrees. The frame deflection angle of the facial feature frame can thus be determined from the angle between the line connecting the set feature points and the reference plane (the horizontal or vertical plane). For example, if the angle between the line connecting the user's two eyes and the horizontal plane is 20 degrees, the frame deflection angle of the facial feature frame is also 20 degrees.
Step S403: the user facial image display device determines the facial feature frame according to the positions of the set feature points determined in step S401 and the frame size determined in step S402.
The user facial image display device then determines the user's shooting distance according to the facial feature frame. The shooting distance is determined mainly from the frame size of the facial feature frame: the larger the frame size, the smaller the shooting distance; the smaller the frame size, the larger the shooting distance.
Further, the user facial image display device can also set the angle of the facial feature frame according to the frame deflection angle obtained in step S402, so that the display angle of the user's facial image within the facial feature frame can subsequently be adjusted and the user's face displayed front-on.
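Steps S401-S403 and the distance rule can be condensed into one sketch. The scale factor, the reference face width, and the pinhole-model focal length below are illustrative assumptions; the description states only that a longer line between the set feature points means a larger frame size and a smaller shooting distance.

```python
import math

def face_frame(eye_l, eye_r, scale=2.5, ref_face_width_m=0.16, focal_px=600.0):
    """Build a square facial feature frame from two set feature points
    (here the two eye centers, in pixels) and estimate the shooting
    distance with a simple pinhole camera model."""
    dx, dy = eye_r[0] - eye_l[0], eye_r[1] - eye_l[1]
    line_len = math.hypot(dx, dy)                       # length of the connecting line
    side = scale * line_len                             # frame size grows with the line
    deflection_deg = math.degrees(math.atan2(dy, dx))   # angle to the horizontal plane
    center = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    distance_m = focal_px * ref_face_width_m / side     # larger frame -> closer user
    return center, side, deflection_deg, distance_m
```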
In step S103, since the user facial image display device uses fixed user display positions to display user facial images, the user facial images need to be unified to a preset display size for display.
The user facial image display device therefore adjusts, renders, and displays the user's facial image within the facial feature frame according to the preset display size of the user's facial image and the shooting distance obtained in step S102. Please refer to FIG. 5, which is a flowchart of step S103 in an embodiment of the user facial image display method of the present invention. Step S103 includes:
Step S501: the user facial image display device obtains the user's facial image collection area, that is, the image area corresponding to the user's feature frame, according to the shooting distance obtained in step S102.
Step S502: the user facial image display device collects the user's facial image, that is, the image corresponding to the user's feature frame, in the collection area.
Step S503: the user facial image display device scales the facial image collected in step S502 based on the preset display size, so that the facial image can be displayed in the fixed user display position, and adjusts the angle of the facial image based on the frame deflection angle. The fixed user display position here is a display position set at a fixed location on the terminal screen and used to track and display a particular user's facial image.
Step S504: the user facial image display device renders and displays the facial image scaled in step S503. Meanwhile, the user facial image display device can also use the panoramic picture display position to display the 360-degree panoramic picture frame or the expanded panoramic picture frame; that is, the fixed user display positions display specific users while the panoramic picture display position displays the entire panoramic picture.
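Steps S501-S504 amount to crop, straighten, and zoom. The following sketch uses OpenCV to render one face into a fixed user display position; the 256-pixel preset display size and the function name are assumptions, and the rotation sign may need flipping depending on the image coordinate convention.

```python
import cv2
import numpy as np

def render_face_slot(frame: np.ndarray, center, side: float,
                     deflection_deg: float, display_size: int = 256) -> np.ndarray:
    """Rotate the facial feature frame upright, crop the collection area,
    and scale it to the preset display size (steps S501-S504)."""
    # Cancel the frame deflection angle so the face is displayed upright.
    rot = cv2.getRotationMatrix2D(center, deflection_deg, 1.0)
    upright = cv2.warpAffine(frame, rot, (frame.shape[1], frame.shape[0]))
    # Crop the collection area corresponding to the facial feature frame.
    half = int(side / 2)
    cx, cy = int(center[0]), int(center[1])
    crop = upright[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    # Zoom so users at different shooting distances appear at the same size.
    return cv2.resize(crop, (display_size, display_size))
```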
This completes the rendering and display of the user's facial image in the user facial image display method of this embodiment.
The user facial image display method of this embodiment determines the user's facial image based on a facial feature frame, ensuring effective recognition of user facial images at different distances in the panoramic picture and thereby realizing accurate rendering and display of the user's facial image.
The present invention also provides a user facial image display device. Please refer to FIG. 6, which is a schematic structural diagram of an embodiment of the user facial image display device of the present invention. The user facial image display device of this embodiment can be implemented using the user facial image display method described above. The user facial image display device 60 of this embodiment includes a facial feature point acquisition module 61, a shooting distance determination module 62, and a facial image rendering and display module 63.
The facial feature point acquisition module 61 is used to acquire a panoramic picture frame of the current scene and to perform a face recognition operation on the panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame; the shooting distance determination module 62 is used to determine a corresponding facial feature frame based on the facial feature points and to determine the user's shooting distance according to the facial feature frame; the facial image rendering and display module 63 is used to render and display the user's facial image according to the preset display size of the user's facial image and the user's shooting distance.
When the user facial image display device 60 of this embodiment is used, the facial feature point acquisition module 61 first acquires the panoramic picture frame of the current scene captured by the panoramic camera. The panoramic picture frame includes picture information within a 360-degree range centered on the panoramic camera, and includes picture content information as well as picture position information used to represent the position information corresponding to the picture content information.
The facial feature point acquisition module 61 then performs a face recognition operation on the acquired panoramic picture frame to obtain the facial feature points of each user in the panoramic picture frame. The facial feature points here are facial features such as a person's eyes, mouth, and nose.
The shooting distance determination module 62 then determines the corresponding facial feature frame based on the facial feature points, and determines the user's shooting distance according to the facial feature frame, mainly based on its frame size: the larger the frame size of the facial feature frame, the smaller the user's shooting distance; the smaller the frame size, the larger the shooting distance.
Since the user facial image display device 60 uses fixed user display positions to display user facial images, the user facial images need to be unified to a preset display size for display.
Finally, the facial image rendering and display module 63 adjusts, renders, and displays the user's facial image within the facial feature frame according to the preset display size of the user's facial image and the user's shooting distance. Meanwhile, the facial image rendering and display module 63 can also use the panoramic picture display position to display the 360-degree panoramic picture frame or the expanded panoramic picture frame; that is, the fixed user display positions display specific users while the panoramic picture display position displays the entire panoramic picture.
This completes the rendering and display of the user's facial image by the user facial image display device 60 of this embodiment.
The working principle of the user facial image display method and device of the present invention is described below through a specific embodiment. Please refer to FIG. 7 and FIGS. 8a-8c; FIG. 7 is a flowchart of displaying user facial images and the panoramic picture with the user facial image display method and device of the present invention, and FIGS. 8a-8c are the corresponding schematic diagrams. The user facial image display device of this embodiment is installed on an electronic terminal with a display screen, so as to display in real time the panoramic picture information captured by the panoramic camera and the user facial information in the panoramic picture. The panoramic picture therefore includes fixed user display positions and a panoramic picture display position; as shown in FIG. 8c, the panoramic picture displayed by the electronic terminal includes fixed user display positions 801, 802, 803, and 804 and a panoramic picture display position 805. The display flow of the panoramic picture includes:
Step S701: the electronic terminal obtains a panoramic picture frame of the current scene, as shown in FIG. 8a.
Step S702: since a user's face may be located at the edge of the panoramic picture frame, the electronic terminal uses the end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end, obtaining the expanded panoramic picture frame shown in FIG. 8b.
Step S703: the electronic terminal performs a face recognition operation on the expanded panoramic picture frame to obtain the facial feature points of each user in it, such as those of user a, user b, user c, and user d in FIG. 8b. Because user d is at the edge of the panoramic picture frame, user d has a first facial feature point in the end picture at one end of the frame (the right end of FIG. 8b) and a second facial feature point in the end picture at the other end (the left end of FIG. 8b).
Step S704: since the weight coefficient of user d's first facial feature point is greater than that of the second facial feature point, the electronic terminal deletes user d's second facial feature point.
Step S705: the electronic terminal determines the corresponding facial feature frames based on the facial feature points of user a, user b, user c, and user d, specifically the facial feature frames 806, 807, 808, and 809 in FIG. 8b.
Step S706: the electronic terminal determines the shooting distance of each user according to the frame size of each facial feature frame, and straightens each facial feature frame according to its frame deflection angle.
Step S707: the electronic terminal scales each user's facial image according to the preset display size of the user facial images and each user's shooting distance, so that the scaled facial images match the preset display size.
Step S708: the electronic terminal displays the scaled facial images in the fixed user display positions 801, 802, 803, and 804, each user's facial image at the preset display size; meanwhile, the electronic terminal displays the 360-degree panoramic picture frame or the expanded panoramic picture frame in the panoramic picture display position 805, so that the specific positions of user a, user b, user c, and user d in the panoramic picture frame can be viewed, as shown in FIG. 8c.
This completes the display of the panoramic picture information captured in real time by the panoramic camera and of the user facial information in the panoramic picture in this embodiment. When the panoramic picture frame is refreshed, the electronic terminal refreshes the panoramic picture information and the corresponding user facial information, thereby continuously and effectively recognizing user facial images at different distances.
The user facial image display method and display device of the present invention determine the user's facial image based on a facial feature frame, ensuring effective recognition of user facial images at different distances in the panoramic picture and thereby realizing accurate rendering and display of the user's facial image; this effectively solves the technical problem that existing user facial image display methods and display devices cannot accurately render and display the user's facial picture.
As used in this application, the terms "component", "module", "system", "interface", "process", and the like are generally intended to refer to a computer-related entity: hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable application, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
FIG. 9 and the following discussion provide a brief, general description of the working environment of the electronic device in which the user facial image display apparatus of the present invention is located. The working environment of FIG. 9 is only one example of a suitable working environment and is not intended to suggest any limitation on the scope of use or functionality of the working environment. Example electronic devices 912 include, but are not limited to, wearable devices, head-mounted devices, medical and health platforms, personal computers, server computers, handheld or laptop devices, mobile devices (such as mobile phones, personal digital assistants (PDAs), and media players), multiprocessor systems, consumer electronic devices, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and so on.
Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more electronic devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, application programming interfaces (APIs), and data structures, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
FIG. 9 illustrates an example of an electronic device 912 including one or more embodiments of the user facial image display apparatus of the present invention. In one configuration, the electronic device 912 includes at least one processing unit 916 and a memory 918. Depending on the exact configuration and type of electronic device, the memory 918 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This configuration is illustrated by the dashed line 914 in FIG. 9.
In other embodiments, the electronic device 912 may include additional features and/or functionality. For example, the device 912 may also include additional storage (for example, removable and/or non-removable), including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in FIG. 9 by the storage 920. In one embodiment, computer readable instructions for implementing one or more embodiments provided herein may be in the storage 920. The storage 920 may also store other computer readable instructions for implementing an operating system, application programs, and the like. The computer readable instructions may be loaded into the memory 918 for execution by, for example, the processing unit 916.
The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. The memory 918 and the storage 920 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the electronic device 912. Any such computer storage media may be part of the electronic device 912.
The electronic device 912 may also include a communication connection 926 that allows the electronic device 912 to communicate with other devices. The communication connection 926 may include, but is not limited to, a modem, a network interface card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting the electronic device 912 to other electronic devices. The communication connection 926 may include a wired connection or a wireless connection, and may transmit and/or receive communication media.
The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
The electronic device 912 may include an input device 924, such as a keyboard, a mouse, a pen, a voice input device, a touch input device, an infrared camera, a video input device, and/or any other input device. An output device 922, such as one or more displays, speakers, printers, and/or any other output device, may also be included in the device 912. The input device 924 and the output device 922 may be connected to the electronic device 912 via a wired connection, a wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another electronic device may be used as the input device 924 or the output device 922 of the electronic device 912.
The components of the electronic device 912 may be connected by various interconnects, such as buses. Such interconnects may include Peripheral Component Interconnect (PCI) (such as PCI Express), Universal Serial Bus (USB), FireWire (IEEE 1394), an optical bus structure, and so on. In another embodiment, the components of the electronic device 912 may be interconnected by a network. For example, the memory 918 may be composed of multiple physical memory units located in different physical locations and interconnected by a network.
Those skilled in the art will recognize that storage devices used to store computer readable instructions may be distributed across a network. For example, an electronic device 930 accessible via a network 928 may store computer readable instructions for implementing one or more embodiments provided by the present invention. The electronic device 912 may access the electronic device 930 and download part or all of the computer readable instructions for execution. Alternatively, the electronic device 912 may download pieces of the computer readable instructions as needed, or some instructions may be executed at the electronic device 912 and some at the electronic device 930.
Various operations of embodiments are provided herein. In one embodiment, the one or more operations described may constitute computer readable instructions stored on one or more computer readable media, which, when executed by an electronic device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as implying that these operations are necessarily order-dependent; those skilled in the art will appreciate alternative orderings having the benefit of this description. Moreover, it should be understood that not all operations are necessarily present in every embodiment provided herein.
Moreover, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to those skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above-described components (such as elements and resources), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (for example, one that is functionally equivalent), even if not structurally equivalent to the disclosed structure which performs the function in the exemplary implementations of the disclosure illustrated herein. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes", "having", "has", "with", or variants thereof are used in the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".
The functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Each of the above devices or systems may execute the method in the corresponding method embodiment.
In summary, although the present invention has been disclosed above by way of embodiments, the serial numbers before the embodiments are used only for convenience of description and do not limit the order of the embodiments of the present invention. Moreover, the above embodiments are not intended to limit the present invention; those of ordinary skill in the art may make various changes and refinements without departing from the spirit and scope of the present invention, so the protection scope of the present invention is defined by the scope of the claims.

Claims (15)

  1. A user facial image display method, comprising:
    acquiring a panoramic picture frame of a current scene, and performing a face recognition operation on the panoramic picture frame to obtain facial feature points of each user in the panoramic picture frame;
    determining a corresponding facial feature frame based on the facial feature points, and determining a user shooting distance of the user according to the facial feature frame; and
    rendering and displaying a user facial image of the user according to a preset display size of the user facial image of the user and the user shooting distance.
  2. The user facial image display method according to claim 1, wherein the panoramic picture frame comprises picture position information;
    the step of performing a face recognition operation on the panoramic picture frame to obtain facial feature points of each user in the panoramic picture frame comprises:
    according to the picture position information, using an end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end of the panoramic picture frame, to obtain an expanded panoramic picture frame;
    performing a face recognition operation on the expanded panoramic picture frame to obtain facial feature points of each user in the expanded panoramic picture frame; and
    when there are facial feature points having the same picture position information, performing deduplication processing on the facial feature points having the same picture position information.
  3. The user facial image display method according to claim 2, wherein the step of performing deduplication processing on the facial feature points having the same picture position information comprises:
    determining related feature points of a first facial feature point within a set range, and determining related feature points, within a set range, of a second facial feature point having the same picture position information as the first facial feature point; and
    if the weight coefficient of the related feature points of the first facial feature point is greater than or equal to the weight coefficient of the related feature points of the second facial feature point, deleting the second facial feature point; otherwise, deleting the first facial feature point.
  4. The user facial image display method according to claim 3, wherein
    the weight coefficient of the related feature points of the first facial feature point is determined according to the category, quality, and quantity of the related feature points of the first facial feature point; and
    the weight coefficient of the related feature points of the second facial feature point is determined according to the category, quality, and quantity of the related feature points of the second facial feature point.
  5. The user facial image display method according to claim 3, wherein
    before the deleting of the second facial feature point, the method further comprises: correcting the related feature points of the first facial feature point using the related feature points of the second facial feature point, so that the weight coefficient of the corrected related feature points of the first facial feature point is maximized; and
    before the deleting of the first facial feature point, the method further comprises: correcting the related feature points of the second facial feature point using the related feature points of the first facial feature point, so that the weight coefficient of the corrected related feature points of the second facial feature point is maximized.
  6. The user facial image display method according to claim 1, wherein the step of determining a corresponding facial feature frame based on the facial feature points comprises:
    acquiring at least two set feature points among the facial feature points;
    determining a frame size of the facial feature frame based on the length of a line connecting the set feature points; and
    determining the facial feature frame according to the positions of the set feature points and the frame size of the facial feature frame.
  7. The user facial image display method according to claim 1, wherein the step of determining a corresponding facial feature frame based on the facial feature points comprises:
    acquiring at least two set feature points among the facial feature points;
    determining a frame size of the facial feature frame based on the length of a line connecting the set feature points, and determining a frame deflection angle of the facial feature frame based on the angle between the line connecting the set feature points and a reference plane; and
    determining the facial feature frame according to the positions of the set feature points, the frame size of the facial feature frame, and the frame deflection angle of the facial feature frame.
  8. The user facial image display method according to claim 1, wherein the step of rendering and displaying the user facial image of the user according to the preset display size of the user facial image of the user and the user shooting distance comprises:
    acquiring a user facial image acquisition region of the user according to the user shooting distance;
    acquiring the user facial image in the user facial image acquisition region;
    scaling the user facial image based on the preset display size of the user facial image, so that the user facial image can be displayed in a fixed user display slot; and
    rendering and displaying the scaled user facial image.
  9. The user facial image display method according to claim 2, further comprising:
    displaying the expanded panoramic picture frame in a panoramic picture display slot.
  10. A user facial image display device for displaying a panoramic picture, the user facial image display device comprising:
    a facial feature point acquisition module, configured to acquire a panoramic picture frame of a current scene, and perform a face recognition operation on the panoramic picture frame to obtain facial feature points of each user in the panoramic picture frame;
    a shooting distance determination module, configured to determine a corresponding facial feature frame based on the facial feature points, and determine a user shooting distance of the user according to the facial feature frame; and
    a facial image rendering and display module, configured to render and display a user facial image of the user according to a preset display size of the user facial image of the user and the user shooting distance.
  11. The user facial image display device according to claim 10, wherein the panoramic picture frame comprises picture position information; and
    the facial feature point acquisition module is configured to: according to the picture position information, use an end picture at one end of the panoramic picture frame to expand the picture frame edge at the other end of the panoramic picture frame, to obtain an expanded panoramic picture frame;
    perform a face recognition operation on the expanded panoramic picture frame to obtain facial feature points of each user in the expanded panoramic picture frame; and
    when there are facial feature points having the same picture position information, perform deduplication processing on the facial feature points having the same picture position information.
  12. The user facial image display device according to claim 10, wherein the shooting distance determination module is configured to: acquire at least two set feature points among the facial feature points;
    determine a frame size of the facial feature frame based on the length of a line connecting the set feature points; and
    determine the facial feature frame according to the positions of the set feature points and the frame size of the facial feature frame.
  13. The user facial image display device according to claim 10, wherein the shooting distance determination module is configured to: acquire at least two set feature points among the facial feature points;
    determine a frame size of the facial feature frame based on the length of a line connecting the set feature points, and determine a frame deflection angle of the facial feature frame based on the angle between the line connecting the set feature points and a reference plane; and
    determine the facial feature frame according to the positions of the set feature points, the frame size of the facial feature frame, and the frame deflection angle of the facial feature frame.
  14. The user facial image display device according to claim 10, wherein the facial image rendering and display module is configured to: acquire a user facial image acquisition region of the user according to the user shooting distance;
    acquire the user facial image in the user facial image acquisition region;
    scale the user facial image based on the preset display size of the user facial image, so that the user facial image can be displayed in a fixed user display slot; and
    render and display the scaled user facial image.
  15. A computer-readable storage medium having processor-executable instructions stored therein, the instructions being loaded by one or more processors to execute the user facial image display method according to claim 1.
PCT/CN2021/078312 2020-03-13 2021-02-27 User facial image display method, display device and corresponding storage medium WO2021179923A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010178024.1A CN111402391B (zh) 2020-03-13 2020-03-13 User facial image display method, display device and corresponding storage medium
CN202010178024.1 2020-03-13

Publications (1)

Publication Number Publication Date
WO2021179923A1 true WO2021179923A1 (zh) 2021-09-16

Family

ID=71432509

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078312 WO2021179923A1 (zh) 2020-03-13 2021-02-27 User facial image display method, display device and corresponding storage medium

Country Status (2)

Country Link
CN (1) CN111402391B (zh)
WO (1) WO2021179923A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402391B (zh) 2020-03-13 2023-09-01 Shenzhen Kandao Technology Co., Ltd. User facial image display method, display device and corresponding storage medium
CN114743252B (zh) * 2022-06-10 2022-09-16 CATARC Automotive Test Center (Tianjin) Co., Ltd. Feature point screening method, device and storage medium for a head model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8345921B1 (en) * 2009-03-10 2013-01-01 Google Inc. Object detection with false positive filtering
CN103176693A (zh) * 2013-02-18 2013-06-26 LCFC (Hefei) Electronics Technology Co., Ltd. Method and device for automatically changing the orientation of a terminal screen picture
CN105139340A (zh) * 2015-09-15 2015-12-09 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Panoramic photo stitching method and device
CN109068060A (zh) * 2018-09-05 2018-12-21 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device, terminal device, and computer-readable storage medium
US20180376121A1 (en) * 2017-06-22 2018-12-27 Acer Incorporated Method and electronic device for displaying panoramic image
CN110659623A (zh) * 2019-09-27 2020-01-07 Shenzhen Kandao Technology Co., Ltd. Panoramic picture display method and device based on frame division processing, and storage medium
CN110673811A (zh) * 2019-09-27 2020-01-10 Shenzhen Kandao Technology Co., Ltd. Panoramic picture display method and device based on sound information localization, and storage medium
CN111402391A (zh) * 2020-03-13 2020-07-10 Shenzhen Kandao Technology Co., Ltd. User facial image display method, display device and corresponding storage medium

Also Published As

Publication number Publication date
CN111402391A (zh) 2020-07-10
CN111402391B (zh) 2023-09-01

Similar Documents

Publication Publication Date Title
US8203595B2 (en) Method and apparatus for enabling improved eye contact in video teleconferencing applications
TW201901527A (zh) Video conferencing device and video conference management method
WO2021114990A1 (zh) Face distortion correction method and device, electronic device, and storage medium
EP3506167B1 (en) Processing method and mobile device
WO2021027585A1 (zh) Facial image processing method and electronic device
WO2021179923A1 (zh) User facial image display method, display device and corresponding storage medium
CN109561257B (zh) Picture focusing method, device, terminal, and corresponding storage medium
WO2021093689A1 (zh) Facial image deformation method and apparatus, electronic device, and computer-readable medium
CN110457963B (zh) Display control method and device, mobile terminal, and computer-readable storage medium
US10929982B2 (en) Face pose correction based on depth information
US20210335391A1 (en) Resource display method, device, apparatus, and storage medium
WO2021170123A1 (zh) Video generation method and device, and corresponding storage medium
WO2020248950A1 (zh) Method for determining validity of facial features, and electronic device
CN115205925A (zh) Expression coefficient determination method and device, electronic device, and storage medium
CN110673811B (zh) Panoramic picture display method and device based on sound information localization, and storage medium
US20220139016A1 (en) Sticker generating method and apparatus, and medium and electronic device
KR20210049649A (ko) Method and apparatus for enhancing a face image, and electronic device
WO2020135577A1 (zh) Picture generation method, device, terminal, and corresponding storage medium
CN112714337A (zh) Video processing method and device, electronic device, and storage medium
WO2021109863A1 (zh) Photo processing method and photo processing device
WO2022027191A1 (zh) Plane correction method and device, computer-readable medium, and electronic device
US10902265B2 (en) Imaging effect based on object depth information
CN110659623B (zh) Panoramic picture display method and device based on frame division processing, and storage medium
CN107872619B (zh) Photographing processing method, device, and apparatus
US20240098359A1 (en) Gesture control during video capture

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21767354

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.02.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21767354

Country of ref document: EP

Kind code of ref document: A1