CN110611732B - Window control method and related product - Google Patents

Publication number
CN110611732B
CN110611732B (application CN201810623736.2A)
Authority
CN
China
Prior art keywords
target
preview window
expression
image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810623736.2A
Other languages
Chinese (zh)
Other versions
CN110611732A (en)
Inventor
黄旭
刘博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810623736.2A
Publication of CN110611732A
Application granted
Publication of CN110611732B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Abstract

The embodiment of the application discloses a window control method and a related product. The method includes: an electronic device displays a plurality of preview windows, each used for displaying facial images of a single user or of a plurality of users; collects facial data of a user through a camera and displays the user's facial image in a target preview window according to the facial data, the target preview window being at least one of the plurality of preview windows; acquires sticker setting information of the target preview window and displays the corresponding sticker in the target preview window according to that information; recognizes the user's expression according to the facial image in the target preview window; judges whether the expression is a preset expression; and, when it is, freezes the image of the target preview window. The embodiment of the application helps improve the accuracy and intelligence of window control by electronic devices.

Description

Window control method and related product
Technical Field
The application relates to the technical field of shooting, in particular to a window control method and a related product.
Background
With the rapid development of smartphone-related technologies, more and more applications are installed on users' mobile phones, such as reading, payment, game, and music applications, and people's daily lives have become inseparable from their phones. During operations such as application updates on a smartphone, files the user did not intend to delete may be deleted by mistake, causing file loss and a poor user experience.
Nowadays, users increasingly shoot images with electronic devices, and the functions of image-shooting applications are increasingly diversified, for example adding special effects or stickers to the shot images, but existing sticker technology still needs improvement in personalization and intelligence.
Disclosure of Invention
The embodiment of the application provides a window control method and a related product, which can improve the accuracy and intelligence of window control of electronic equipment.
In a first aspect, an embodiment of the present application provides a window control method, where the method includes:
displaying a preview window, wherein the preview window is a plurality of preview windows, and each preview window is used for displaying facial images of a single user or a plurality of users;
acquiring face data of a user through a camera, and displaying a face image of the user in a target preview window according to the face data, wherein the target preview window is at least one preview window in the plurality of preview windows;
acquiring the sticker setting information of the target preview window;
displaying a corresponding sticker in the target preview window according to the sticker setting information;
recognizing the expression of a user according to the facial image in the target preview window;
judging whether the expression is a preset expression or not;
and when the expression is a preset expression, freezing the image of the target preview window.
In a second aspect, an embodiment of the present application provides a window control apparatus, where the window control apparatus includes a display unit, a collection unit, an acquisition unit, an identification unit, and a freezing unit,
the display unit is used for displaying a preview window, the preview window is a plurality of preview windows, and each preview window is used for displaying facial images of a single user or a plurality of users;
the acquisition unit is used for acquiring the face data of a user through a camera and displaying a face image of the user in a target preview window according to the face data, wherein the target preview window is at least one preview window in the plurality of preview windows;
the acquisition unit is used for acquiring the sticker setting information of the target preview window;
the display unit is also used for displaying the corresponding sticker in the target preview window according to the sticker setting information;
the identification unit is used for recognizing the expression of the user according to the facial image in the target preview window, and is also used for judging whether the expression is a preset expression;
and the freezing unit is used for freezing the image of the target preview window when the expression is a preset expression.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in any one of the methods of the first aspect of this application, and the computer includes an electronic device.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package, the computer comprising an electronic device.
It can be seen that, in the embodiment of the present application, the electronic device first displays the preview window, collects the user's facial data through the camera, and displays the user's facial image in the target preview window according to the facial data; next, it acquires the sticker setting information of the target preview window and displays the corresponding sticker in the target preview window according to that information; it then recognizes the user's expression from the facial image in the target preview window; finally, it judges whether the expression is a preset expression and, when it is, freezes the image of the target preview window. In this way, in a scenario where a sticker is set on a facial image, the electronic device can freeze the dynamic image of the target preview window in real time according to the user's expression, obtaining an image that meets the user's requirements without the user having to tap a shutter button or perform any other operation. This prevents extra operations from disturbing the user's real-time expression and thus degrading the photo quality, and improves the convenience and accuracy with which the electronic device generates sticker photos.
Drawings
The accompanying drawings used in the embodiments of the present application are briefly described below.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 2A is a flowchart illustrating a window control method according to an embodiment of the present disclosure;
FIG. 2B is a schematic diagram of a preview window in a window control method according to an embodiment of the present disclosure;
FIG. 2C is a schematic diagram of a preview window in a window control method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a window control method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a window control method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
FIG. 6 is a block diagram of functional units of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiment of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, which have wireless communication functions, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices. The operating system related to the embodiment of the invention is a software system which performs unified management on hardware resources and provides a service interface for a user.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure, where the electronic device 100 includes: the display device comprises a shell 110, a circuit board 120 arranged in the shell 110 and a display screen 130 arranged on the shell 110, wherein a processor 121 is arranged on the circuit board 120, and the processor 121 is connected with the display screen 130.
The following describes embodiments of the present application in detail.
Referring to fig. 2A, fig. 2A is a schematic flowchart of a window control method according to an embodiment of the present application, and as shown in fig. 2A, the window control method includes:
s201, the electronic equipment displays a preview window, wherein the preview window is a plurality of preview windows, and each preview window is used for displaying facial images of a single user or a plurality of users.
S202, the electronic equipment collects face data of a user through a camera, and displays a face image of the user in a target preview window according to the face data, wherein the target preview window is at least one preview window in the plurality of preview windows.
S203, the electronic equipment acquires the sticker setting information of the target preview window.
The sticker setting information may be preset when the electronic device leaves the factory, or acquired from a third-party application; it is not uniquely limited here.
S204, the electronic equipment displays the corresponding sticker in the target preview window according to the sticker setting information.
S205, the electronic equipment identifies the expression of the user according to the facial image in the target preview window.
S206, the electronic equipment judges whether the expression is a preset expression or not.
And S207, when the expression is a preset expression, the electronic equipment freezes the image of the target preview window.
The preset expression may include, but is not limited to, a smile, a laugh, a pout, a frown, and the like.
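As an illustrative aid only (not part of the claims), steps S201 to S207 can be sketched in Python; class and function names such as `PreviewWindow` and `show_frame` are hypothetical and do not appear in the patent:

```python
# Hypothetical sketch of the claimed flow: each preview window shows live
# frames with a sticker applied, and freezes itself the first time a
# preset expression is recognised.

PRESET_EXPRESSIONS = {"smile", "laugh", "pout", "frown"}

class PreviewWindow:
    def __init__(self, window_id):
        self.window_id = window_id
        self.sticker = None
        self.frozen_frame = None  # set once a preset expression is seen

    def apply_sticker(self, sticker_setting):
        # Corresponds to S203/S204: display the sticker per the setting info.
        self.sticker = sticker_setting

    def show_frame(self, frame, expression):
        """Display one camera frame (S205-S207): freeze the window the
        first time a preset expression is recognised, then ignore
        later frames. Returns whether the window is now frozen."""
        if self.frozen_frame is not None:
            return True  # already frozen, no shutter tap needed
        if expression in PRESET_EXPRESSIONS:
            self.frozen_frame = frame
        return self.frozen_frame is not None
```

A usage sketch: frames with a neutral expression keep updating the window, and the first smiling frame freezes it, after which later frames are ignored.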
It can be seen that, in the embodiment of the present application, the electronic device first displays the preview window, collects the user's facial data through the camera, and displays the user's facial image in the target preview window according to the facial data; next, it acquires the sticker setting information of the target preview window and displays the corresponding sticker in the target preview window according to that information; it then recognizes the user's expression from the facial image in the target preview window; finally, it judges whether the expression is a preset expression and, when it is, freezes the image of the target preview window. In this way, in a scenario where a sticker is set on a facial image, the electronic device can freeze the dynamic image of the target preview window in real time according to the user's expression, obtaining an image that meets the user's requirements without the user having to tap a shutter button or perform any other operation. This prevents extra operations from disturbing the user's real-time expression and thus degrading the photo quality, and improves the convenience and accuracy with which the electronic device generates sticker photos.
In one possible example, the displaying of a corresponding sticker in the target preview window according to the sticker setting information includes: the electronic device determines a target sticker according to the sticker setting information; caches the target sticker in the texture corresponding to the current target preview window, where the texture is a container for caching image data; acquires a label of the display area of the current preview window, the label being used to identify the display area of the current preview window; and copies the image data in the texture into a target texture area according to the label of the display area, so that the target sticker is displayed in the display area of the current preview window, where the target texture area corresponds to the display area of the current preview window.
The texture is a container for storing image data, which may be a buffer; OpenGL ES can compute and change the color of each pixel of the image data in this container through a shader and output the result to the screen. When developing against the OpenGL ES 2.0 API, shader code is written to perform vertex transformations and texture color calculations. OpenGL (Open Graphics Library) is a specification defining a cross-language, cross-platform programming interface for three-dimensional (and also two-dimensional) graphics; it is a professional graphics programming interface and a powerful, convenient low-level graphics library.
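As a CPU-side stand-in for the texture copy described above (a real pipeline would copy between GPU textures via OpenGL ES), the region bookkeeping can be sketched as follows; the image layout as lists of RGBA tuples and the `region` tuple are assumptions for illustration:

```python
def copy_sticker_into_region(preview, sticker, region):
    """Copy cached sticker pixels into the display area of the preview.
    Images are lists of rows of (r, g, b, a) tuples; `region` is the
    (top, left) corner of the display area identified by its label.
    Transparent sticker pixels leave the preview pixel unchanged."""
    top, left = region
    out = [row[:] for row in preview]  # work on a copy of the preview
    for dy, sticker_row in enumerate(sticker):
        for dx, (r, g, b, a) in enumerate(sticker_row):
            if a > 0:  # opaque sticker pixel overwrites the preview
                out[top + dy][left + dx] = (r, g, b, 255)
    return out
```

The design choice mirrors the text: the sticker lives in its own cached texture, and compositing is a region-limited copy into the area the display-area label identifies, so each preview window can carry a different sticker.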
Therefore, in this example, through real-time preview of the target sticker, the electronic device lets the user see the sticker's effect in the target preview window in real time, which improves the user experience and benefits the intelligence of window control.
In one possible example, the recognizing the expression of the user according to the facial image in the target preview window includes: the electronic equipment matches the facial image in the target preview window with a plurality of preset image templates in a preset image template library to obtain a plurality of matching values; and determining the target expression corresponding to the preset image template with the maximum matching value as the expression of the user according to the corresponding relation between the preset image template and the expression.
The preset image template and the expression may be in one-to-one, one-to-many, or many-to-many correspondence, which is not limited herein.
For example, the electronic device displays a four-pane preview window and collects the user's facial image as shown in fig. 2B; it performs expression matching against the preset image templates and determines that the user's expression is a smile, the pane frozen on a smile being the last one; a fourth sticker is then selected through the user's touch instruction, giving the frozen window shown in fig. 2C.
Therefore, in this example, through the correspondence between preset image templates and expressions, the electronic device can accurately determine the target expression that triggers the freeze and freeze the user's facial image in the corresponding preview window, avoiding misrecognized expressions and improving the accuracy and intelligence of expression recognition in window control.
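The patent does not fix the matching metric behind the "matching values". As one hedged illustration, normalised cross-correlation over flattened grayscale images produces a score per template, and the template with the maximum score selects the expression:

```python
import math

def match_expression(face, templates):
    """Match a flattened grayscale face image against preset templates and
    return (best_expression, scores). Normalised cross-correlation is one
    plausible metric; the patent does not specify which one is used."""
    def ncc(a, b):
        mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
        ca = [x - mean_a for x in a]
        cb = [x - mean_b for x in b]
        denom = (math.sqrt(sum(x * x for x in ca))
                 * math.sqrt(sum(x * x for x in cb)))
        return sum(x * y for x, y in zip(ca, cb)) / denom if denom else 0.0

    scores = {name: ncc(face, tpl) for name, tpl in templates.items()}
    return max(scores, key=scores.get), scores
```

Under this metric an exact match scores 1.0, so "the preset image template with the maximum matching value" in the text corresponds to the `max` over `scores`.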
In one possible example, the obtaining of the sticker setting information of the target preview window includes: the electronic device acquires a target gesture of the user through the camera and determines the sticker setting information of the target preview window according to the target gesture; or determines the sticker setting information of the target preview window according to a touch instruction of the user.
The target gesture is gesture information preset in the electronic device; for example, a heart sticker is displayed in the preview window when the user makes the corresponding heart gesture.
Therefore, in this example, the electronic device accurately acquires different sticker setting information through different trigger operations, which makes sticker selection more engaging for the user and benefits the intelligence and accuracy of window preview.
In one possible example, after the freezing the image of the target preview window when the expression is a preset expression, the method further includes: acquiring a shooting instruction; and shooting the frozen target preview window according to the shooting instruction to generate a picture.
Optionally, after the images of all the target preview windows among the plurality of preview windows are frozen, the shooting instruction is obtained, and the shooting operation is performed on the frozen target preview windows according to the shooting instruction to generate a picture.
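The optional all-windows-frozen trigger above can be expressed as a small predicate; the mapping from window id to frozen state is an assumption for illustration:

```python
def ready_to_shoot(frozen_by_window):
    """Return True only when every target preview window's image has been
    frozen, matching the optional behaviour described above.
    `frozen_by_window` maps window id -> whether that window is frozen."""
    return bool(frozen_by_window) and all(frozen_by_window.values())
```

The shooting instruction would then be acted on only once this predicate holds, so no window is captured mid-preview.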
When photographing, the pixels of the final texture are read and stored as a picture in Joint Photographic Experts Group (JPEG) format. JPEG can compress files to a very small size; when saving in JPEG format in Photoshop, 11 compression levels are available, denoted 0 to 10. Level 0 has the highest compression ratio and the worst image quality, while even level 10, which preserves detail almost losslessly, can reach a compression ratio of 5:1. An image file that occupies 4.28 MB when stored as BMP takes only 178 KB when stored as JPEG, a compression ratio of about 24:1. After repeated comparison, level 8 compression is adopted as the best trade-off between storage space and image quality.
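The compression ratio quoted above can be checked directly from the sizes given in the text (taking 1 MB = 1024 KB):

```python
def compression_ratio(original_kb, compressed_kb):
    """Ratio of original to compressed size."""
    return original_kb / compressed_kb

bmp_kb = 4.28 * 1024   # 4.28 MB BMP file, as quoted in the text
jpeg_kb = 178          # 178 KB JPEG file, as quoted in the text
ratio = compression_ratio(bmp_kb, jpeg_kb)  # roughly 24:1, as the text states
```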
Therefore, in this example, the electronic device can shoot based on the frozen target preview window, which improves the intelligence of window control.
Referring to fig. 3, fig. 3 is a schematic flow chart of a window control method according to an embodiment of the present application, where as shown in the figure, the window control method includes:
s301, the electronic equipment displays a preview window.
S302, the electronic equipment collects face data of a user through a camera and displays a face image of the user in a target preview window according to the face data.
S303, the electronic equipment acquires the sticker setting information of the target preview window.
S304, the electronic equipment determines the target sticker according to the sticker setting information.
S305, the electronic equipment caches the target sticker in the texture corresponding to the current target preview window, wherein the texture is a container for caching image data.
S306, the electronic equipment obtains the label of the display area of the current preview window, and the label of the display area is used for identifying the display area of the current preview window.
S307, copying the image data in the texture in the target texture area by the electronic equipment according to the mark of the display area, so that the target sticker is displayed in the display area of the current preview window.
S308, the electronic equipment identifies the expression of the user according to the facial image in the target preview window.
S309, the electronic equipment judges whether the expression is a preset expression or not.
S310, when the expression is a preset expression, the electronic equipment freezes the image of the target preview window.
It can be seen that, in the embodiment of the present application, the electronic device first displays the preview window, collects the user's facial data through the camera, and displays the user's facial image in the target preview window according to the facial data; next, it acquires the sticker setting information of the target preview window and displays the corresponding sticker in the target preview window according to that information; it then recognizes the user's expression from the facial image in the target preview window; finally, it judges whether the expression is a preset expression and, when it is, freezes the image of the target preview window. In this way, in a scenario where a sticker is set on a facial image, the electronic device can freeze the dynamic image of the target preview window in real time according to the user's expression, obtaining an image that meets the user's requirements without the user having to tap a shutter button or perform any other operation. This prevents extra operations from disturbing the user's real-time expression and thus degrading the photo quality, and improves the convenience and accuracy with which the electronic device generates sticker photos.
In addition, through real-time preview of the target sticker, the electronic device lets the user see the sticker's effect in the target preview window in real time, which improves the user experience and benefits the intelligence of window control.
In accordance with the embodiment shown in fig. 2A, please refer to fig. 4, which is a flowchart illustrating a window control method according to an embodiment of the present disclosure. As shown in the figure, the window control method includes the following steps:
s401, the electronic equipment displays a preview window.
S402, the electronic equipment collects face data of a user through a camera and displays a face image of the user in a target preview window according to the face data.
And S403, the electronic equipment determines the sticker setting information of the target preview window according to the touch instruction of the user.
S404, the electronic equipment determines the target sticker according to the sticker setting information.
S405, the electronic equipment caches the target sticker in the texture corresponding to the current target preview window, wherein the texture is a container for caching image data.
S406, the electronic equipment acquires the label of the display area of the current preview window, wherein the label of the display area is used for identifying the display area of the current preview window.
S407, copying the image data in the texture in the target texture area by the electronic equipment according to the mark of the display area, so that the target sticker is displayed in the display area of the current preview window.
S408, the electronic equipment matches the face image in the target preview window with a plurality of preset image templates in a preset image template library to obtain a plurality of matching values.
And S409, the electronic equipment determines the target expression corresponding to the preset image template with the maximum matching value as the expression of the user according to the corresponding relation between the preset image template and the expression.
S410, the electronic equipment judges whether the expression is a preset expression or not.
S411, when the expression is a preset expression, the electronic equipment freezes the image of the target preview window.
It can be seen that, in the embodiment of the present application, the electronic device first displays the preview window, collects the user's facial data through the camera, and displays the user's facial image in the target preview window according to the facial data; next, it acquires the sticker setting information of the target preview window and displays the corresponding sticker in the target preview window according to that information; it then recognizes the user's expression from the facial image in the target preview window; finally, it judges whether the expression is a preset expression and, when it is, freezes the image of the target preview window. In this way, in a scenario where a sticker is set on a facial image, the electronic device can freeze the dynamic image of the target preview window in real time according to the user's expression, obtaining an image that meets the user's requirements without the user having to tap a shutter button or perform any other operation. This prevents extra operations from disturbing the user's real-time expression and thus degrading the photo quality, and improves the convenience and accuracy with which the electronic device generates sticker photos.
In addition, through real-time preview of the target sticker, the electronic device lets the user see different effects in the target preview window in real time, which improves the user experience and makes window control of the electronic device more intelligent and engaging.
In addition, through the correspondence between preset image templates and expressions, the electronic device can accurately determine the target expression that triggers the freeze and freeze the user's facial image in the corresponding preview window, avoiding misrecognized expressions and improving the accuracy and intelligence of expression recognition in window control.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It is understood that, to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method example; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a division of logical functions; there may be other division manners in actual implementation.
In accordance with the embodiments shown in fig. 2A, fig. 3, and fig. 4, please refer to fig. 5, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are different from the one or more application programs, and the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the following steps:
displaying a preview window, wherein the preview window is a plurality of preview windows, and each preview window is used for displaying facial images of a single user or a plurality of users;
acquiring face data of a user through a camera, and displaying a face image of the user in a target preview window according to the face data, wherein the target preview window is at least one preview window in the plurality of preview windows;
acquiring the sticker setting information of the target preview window;
displaying a corresponding sticker in the target preview window according to the sticker setting information;
recognizing the expression of a user according to the facial image in the target preview window;
judging whether the expression is a preset expression or not;
and when the expression is a preset expression, freezing the image of the target preview window.
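The steps above can be sketched in illustrative Python; all class and function names here are hypothetical stand-ins, not taken from the patent or from any real camera API:

```python
# Illustrative sketch of the claimed flow: display face, display sticker,
# recognize expression, freeze on a preset expression (no button press).
PRESET_EXPRESSIONS = {"smile", "wink"}  # assumed preset expressions

class PreviewWindow:
    """Minimal stand-in for one of several preview windows."""
    def __init__(self):
        self.face_image = None
        self.sticker = None
        self.frozen = False

def window_control_step(window, face_data, sticker_info, recognize):
    window.face_image = face_data              # display the user's facial image
    window.sticker = sticker_info              # display the corresponding sticker
    expression = recognize(window.face_image)  # recognize the user's expression
    if expression in PRESET_EXPRESSIONS:       # judge: is it a preset expression?
        window.frozen = True                   # freeze without a photographing tap
    return window.frozen
```

The `recognize` callable is deliberately injected, mirroring the patent's separation between the display steps and the template-based recognition step.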
As can be seen, in this embodiment of the application the electronic device first displays the preview windows, collects the user's face data through the camera, and displays the user's facial image in a target preview window according to the face data; next, it acquires the sticker setting information of the target preview window; then, it displays the corresponding sticker in the target preview window according to the sticker setting information; then, it recognizes the user's expression from the facial image in the target preview window; finally, it judges whether the expression is a preset expression, and freezes the image of the target preview window when it is. In a scene where a sticker is set on a facial image, the electronic device can therefore freeze the dynamic image of the target preview window in real time according to the user's expression, obtaining an image that meets the user's requirements without the user having to click a photographing button or perform other operations. This avoids extra operations disturbing the user's real-time expression and thereby degrading the photographing quality, and improves the convenience and accuracy with which the electronic device generates sticker photos.
In one possible example, in the aspect of displaying the corresponding sticker in the target preview window according to the sticker setting information, the instructions in the program are specifically configured to perform the following operations: determining a target sticker according to the sticker setting information; caching the target sticker in the texture corresponding to the current target preview window, wherein the texture is a container for caching image data; acquiring a label of a display area of the current preview window, wherein the label of the display area is used to identify the display area of the current preview window; and copying the image data in the texture into a target texture area according to the label of the display area, so that the target sticker is displayed in the display area of the current preview window, wherein the target texture area corresponds to the display area of the current preview window.
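A minimal sketch of that cache-then-copy path, with the "texture" modeled as a plain Python buffer; a real device would use a GPU texture (e.g. an OpenGL texture object) and the display-area labels would come from the window system, so every name below is an assumption:

```python
# Cache the target sticker in a texture (container for image data), then copy
# it into the target texture area identified by the display-area label.
def show_sticker(sticker_image, window):
    texture = {"data": None}                # container for caching image data
    texture["data"] = list(sticker_image)   # cache the target sticker
    label = window["display_area_label"]    # identifies the display area
    # copy the cached image data into the texture region mapped to that area
    window["target_texture_areas"][label] = texture["data"][:]
    return window["target_texture_areas"][label]
```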
In one possible example, in the aspect of recognizing the expression of the user according to the facial image in the target preview window, the instructions in the foregoing program are specifically configured to perform the following operations: matching the face image in the target preview window with a plurality of preset image templates in a preset image template library to obtain a plurality of matching values; and determining the target expression corresponding to the preset image template with the maximum matching value as the expression of the user according to the corresponding relation between the preset image template and the expression.
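The template-matching step above can be sketched as follows; `match_score` stands in for whatever similarity measure the device actually uses and, like the other names, is an assumption:

```python
# Match the face image against every preset template to get matching values,
# then map the template with the maximum value to its expression via the
# preset template-to-expression correspondence.
def recognize_expression(face_image, template_library,
                         template_to_expression, match_score):
    scores = {tid: match_score(face_image, template)
              for tid, template in template_library.items()}
    best_template = max(scores, key=scores.get)   # maximum matching value
    return template_to_expression[best_template]  # correspondence lookup
```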
In one possible example, in the aspect of obtaining the sticker setting information of the target preview window, the instructions in the program are specifically configured to: acquiring a target gesture of the user through the camera; determining the sticker setting information of the target preview window according to the target gesture; or determining the sticker setting information of the target preview window according to a touch instruction of the user.
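The two claimed input paths (camera gesture, or touch instruction) amount to a simple dispatch; the gesture-to-sticker table below is a hypothetical example, not part of the patent:

```python
# Assumed mapping from recognized gestures to sticker identifiers.
GESTURE_TO_STICKER = {"v_sign": "bunny_ears", "thumbs_up": "crown"}

def sticker_setting_info(gesture=None, touch_selection=None):
    if gesture is not None:                  # path 1: gesture via the camera
        return GESTURE_TO_STICKER.get(gesture)
    return touch_selection                   # path 2: touch instruction
```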
In a possible example, after the freezing the image of the target preview window when the expression is a preset expression, the instructions in the above program are further configured to: acquiring a shooting instruction; and shooting the frozen target preview window according to the shooting instruction to generate a picture.
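A sketch of that post-freeze capture path, with hypothetical names; the point is that the picture is only generated from an already-frozen window once a shooting instruction arrives:

```python
# Shoot the frozen target preview window according to a shooting instruction.
def shoot_frozen_window(window, shooting_instruction):
    if not window.get("frozen"):
        return None                          # nothing has been frozen yet
    if shooting_instruction:                 # shooting instruction acquired
        return {"picture": window["image"]}  # generate the picture
    return None
```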
Fig. 6 shows a block diagram of a possible functional unit composition of the window control apparatus involved in the above embodiments. The window control apparatus 600 includes a display unit 601, an acquisition unit 602, an obtaining unit 603, a recognition unit 604, and a freezing unit 605, wherein:
the display unit 601 is configured to display a preview window, where the preview window is a plurality of preview windows, and each preview window is used to display a facial image of a single user or a plurality of users;
the acquisition unit 602 is configured to acquire face data of a user through a camera, and display a face image of the user in a target preview window according to the face data, where the target preview window is at least one preview window of the multiple preview windows;
the obtaining unit 603 is configured to obtain sticker setting information of the target preview window;
the display unit 601 is further configured to display a corresponding sticker in the target preview window according to the sticker setting information;
the recognition unit 604 is configured to recognize an expression of a user according to the facial image in the target preview window;
the recognition unit 604 is further configured to determine whether the expression is a preset expression;
the freezing unit 605 is configured to freeze the image of the target preview window when the expression is a preset expression.
In a possible example, when the corresponding sticker is displayed in the target preview window according to the sticker setting information, the display unit 601 is specifically configured to: determine a target sticker according to the sticker setting information; cache the target sticker in the texture corresponding to the current target preview window, wherein the texture is a container for caching image data; acquire a label of a display area of the current preview window, wherein the label of the display area is used to identify the display area of the current preview window; and copy the image data in the texture into a target texture area according to the label of the display area, so that the target sticker is displayed in the display area of the current preview window, wherein the target texture area corresponds to the display area of the current preview window.
In a possible example, in the aspect of recognizing the expression of the user according to the facial image in the target preview window, the recognition unit 604 is specifically configured to: match the face image in the target preview window with a plurality of preset image templates in a preset image template library to obtain a plurality of matching values; and determine the target expression corresponding to the preset image template with the maximum matching value as the expression of the user according to the correspondence between preset image templates and expressions.
In a possible example, in terms of obtaining the sticker setting information of the target preview window, the obtaining unit 603 is specifically configured to: acquire a target gesture of the user through the camera; determine the sticker setting information of the target preview window according to the target gesture; or determine the sticker setting information of the target preview window according to a touch instruction of the user.
In a possible example, the window control apparatus 600 further includes a capturing unit 606, and after the image of the target preview window is frozen when the expression is a preset expression, the capturing unit 606 is specifically configured to: acquiring a shooting instruction; and shooting the frozen target preview window according to the shooting instruction to generate a picture.
Fig. 7 is a block diagram illustrating a partial structure of an electronic device provided in an embodiment of the present invention. As shown in fig. 7, the electronic device 710 may include control circuitry, which may include storage and processing circuitry 730. The storage and processing circuit 730 may be a memory, such as a hard disk drive memory, a non-volatile memory (e.g., a flash memory or other electronically programmable read only memory used to form a solid state drive, etc.), a volatile memory (e.g., a static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry 730 may be used to control the operation of the electronic device 710. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 730 may be used to run software in the electronic device 710, such as an internet browsing application, a Voice Over Internet Protocol (VOIP) phone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality implemented based on a status indicator such as a status indicator light of a light emitting diode, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 710, to name a few.
The electronic device 710 may also include input-output circuitry 742. The input-output circuitry 742 may be used to enable the electronic device 710 to receive data from external devices and to output data to external devices. The input-output circuitry 742 may further include sensors 732. The sensors 732 may include ambient light sensors, light and capacitance based proximity sensors, touch sensors (e.g., light and/or capacitance based touch sensors or ultrasonic sensors, where the touch sensors may be part of a touch screen display or may be used independently as a touch sensor structure), acceleration sensors, and other sensors.
Input-output circuit 742 may also include one or more displays, such as display 714. The display 714 may include one or a combination of liquid crystal displays, organic light emitting diode displays, electronic ink displays, plasma displays, displays using other display technologies. Display 714 may include an array of touch sensors (i.e., display 714 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The electronic device 710 may also include an audio component 736. The audio component 736 can be utilized to provide audio input and output functionality for the electronic device 710. Audio components 736 in electronic device 710 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sound.
The communication circuit 738 may be used to provide the electronic device 710 with the capability to communicate with external devices. The communication circuitry 738 may include analog and digital input-output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in the communication circuitry 738 may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless communication circuitry in the communication circuitry 738 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit 738 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 738 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and so forth.
The electronic device 710 may further include a battery, power management circuitry, and other input-output units 740. Input-output unit 740 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may enter commands through the input-output circuitry 742 to control operation of the electronic device 710 and may use output data of the input-output circuitry 742 to enable receipt of status information and other outputs from the electronic device 710.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: various media capable of storing program codes, such as a usb disk, a read-only memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and the like.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
The foregoing embodiments of the present invention have been described in detail, and the principles and embodiments of the present invention are explained herein by using specific examples, which are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A window control method, comprising:
displaying a preview window, wherein the preview window is a plurality of preview windows, and each preview window is used for displaying facial images of a single user or a plurality of users;
acquiring face data of a user through a camera, and displaying a face image of the user in a target preview window according to the face data, wherein the target preview window is at least one preview window in the plurality of preview windows;
acquiring a target gesture of a user through the camera, and determining the sticker setting information of the target preview window according to the target gesture, wherein an image of the sticker corresponds to the target gesture;
displaying a corresponding sticker in the target preview window according to the sticker setting information;
recognizing the expression of a user according to the facial image in the target preview window;
judging whether the expression is a preset expression or not;
and when the expression is a preset expression, freezing one of the preview windows as the image of the target preview window, wherein the dynamic image of the target preview window is frozen in real time according to the expression of the user to obtain an image corresponding to the sticker, and the image of the target window is frozen without receiving an operation of a photographing instruction.
2. The method of claim 1, wherein displaying the corresponding sticker in the target preview window according to the sticker setting information comprises:
determining a target sticker according to the sticker setting information;
caching the target sticker in the texture corresponding to the current target preview window, wherein the texture is a container for caching image data;
acquiring a label of a display area of a current preview window, wherein the label of the display area is used to identify the display area of the current preview window;
copying the image data in the texture into a target texture area according to the label of the display area, so that the target sticker is displayed in the display area of the current preview window, wherein the target texture area corresponds to the display area of the current preview window.
3. The method of claim 1, wherein identifying the expression of the user based on the facial image in the target preview pane comprises:
matching the face image in the target preview window with a plurality of preset image templates in a preset image template library to obtain a plurality of matching values;
and determining the target expression corresponding to the preset image template with the maximum matching value as the expression of the user according to the corresponding relation between the preset image template and the expression.
4. The method of claim 1, wherein obtaining the sticker setting information of the target preview window further comprises: determining the sticker setting information of the target preview window according to a touch instruction of the user.
5. The method of any one of claims 1-4, further comprising, after freezing the image of the target preview window when the expression is a preset expression:
acquiring a shooting instruction;
and shooting the frozen target preview window according to the shooting instruction to generate a picture.
6. A window control device is applied to electronic equipment and is characterized by comprising a display unit, a collection unit, an acquisition unit, an identification unit and a freezing unit,
the display unit is used for displaying a preview window, the preview window is a plurality of preview windows, and each preview window is used for displaying facial images of a single user or a plurality of users;
the collection unit is used for collecting the face data of a user through a camera and displaying a face image of the user in a target preview window according to the face data, wherein the target preview window is at least one preview window in the plurality of preview windows;
the acquisition unit is used for acquiring a target gesture of the user through the camera and determining the sticker setting information of the target preview window according to the target gesture, wherein an image of the sticker corresponds to the target gesture;
the display unit is also used for displaying the corresponding sticker in the target preview window according to the sticker setting information;
the recognition unit is used for recognizing the expression of the user according to the facial image in the target preview window;
the identification unit is also used for judging whether the expression is a preset expression or not;
and the freezing unit is used for freezing one of the preview windows as the image of the target preview window when the expression is a preset expression, wherein the dynamic image of the target preview window is frozen in real time according to the expression of the user to obtain the image corresponding to the sticker, and the image of the target window is frozen without receiving an operation of a photographing instruction.
7. The apparatus according to claim 6, wherein in displaying the corresponding sticker in the target preview window according to the sticker setting information, the display unit is specifically configured to:
determining a target sticker according to the sticker setting information;
caching the target sticker in the texture corresponding to the current target preview window, wherein the texture is a container for caching image data;
acquiring a label of a display area of a current preview window, wherein the label of the display area is used to identify the display area of the current preview window;
copying the image data in the texture into a target texture area according to the label of the display area, so that the target sticker is displayed in the display area of the current preview window, wherein the target texture area corresponds to the display area of the current preview window.
8. The apparatus according to claim 6, wherein, in said recognizing the expression of the user from the facial image in the target preview pane, the recognizing unit is specifically configured to:
matching the face image in the target preview window with a plurality of preset image templates in a preset image template library to obtain a plurality of matching values;
and determining the target expression corresponding to the preset image template with the maximum matching value according to the corresponding relation between the preset image template and the expression.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any of claims 1-5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-5, the computer comprising an electronic device.
CN201810623736.2A 2018-06-15 2018-06-15 Window control method and related product Active CN110611732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810623736.2A CN110611732B (en) 2018-06-15 2018-06-15 Window control method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810623736.2A CN110611732B (en) 2018-06-15 2018-06-15 Window control method and related product

Publications (2)

Publication Number Publication Date
CN110611732A CN110611732A (en) 2019-12-24
CN110611732B true CN110611732B (en) 2021-09-03

Family

ID=68888603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810623736.2A Active CN110611732B (en) 2018-06-15 2018-06-15 Window control method and related product

Country Status (1)

Country Link
CN (1) CN110611732B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669502B (en) * 2020-06-19 2022-06-24 北京字节跳动网络技术有限公司 Target object display method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN106372627A (en) * 2016-11-07 2017-02-01 捷开通讯(深圳)有限公司 Automatic photographing method and device based on face image recognition and electronic device
CN106791364A (en) * 2016-11-22 2017-05-31 维沃移动通信有限公司 Method and mobile terminal that a kind of many people take pictures
CN107995429A (en) * 2017-12-22 2018-05-04 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108012091A (en) * 2017-11-29 2018-05-08 北京奇虎科技有限公司 Image processing method, device, equipment and its storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7519223B2 (en) * 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
CN102638654B (en) * 2012-03-28 2015-03-25 华为技术有限公司 Method, device and equipment for outputting multi-pictures
CN102854983B (en) * 2012-09-10 2015-12-02 中国电子科技集团公司第二十八研究所 A kind of man-machine interaction method based on gesture identification
US9760262B2 (en) * 2013-03-15 2017-09-12 Microsoft Technology Licensing, Llc Gestures involving direct interaction with a data visualization
CN106998423A (en) * 2016-01-26 2017-08-01 宇龙计算机通信科技(深圳)有限公司 Image processing method and device


Also Published As

Publication number Publication date
CN110611732A (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN110020622B (en) Fingerprint identification method and related product
CN109614865B (en) Fingerprint identification method and related product
CN108710458B (en) Split screen control method and terminal equipment
CN109240577B (en) Screen capturing method and terminal
CN108307106B (en) Image processing method and device and mobile terminal
CN107241552B (en) Image acquisition method, device, storage medium and terminal
CN107729889B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108833779B (en) Shooting control method and related product
CN109495616B (en) Photographing method and terminal equipment
CN107566746B (en) Photographing method and user terminal
JP2016511875A (en) Image thumbnail generation method, apparatus, terminal, program, and recording medium
CN109753202B (en) Screen capturing method and mobile terminal
CN109618218B (en) Video processing method and mobile terminal
CN108462826A (en) A kind of method and mobile terminal of auxiliary photo-taking
CN108718389B (en) Shooting mode selection method and mobile terminal
CN110519503B (en) Method for acquiring scanned image and mobile terminal
CN110933312B (en) Photographing control method and related product
CN110225282B (en) Video recording control method, device and computer readable storage medium
CN108647566B (en) Method and terminal for identifying skin characteristics
CN108121583B (en) Screen capturing method and related product
CN105513098B (en) Image processing method and device
CN108510266A (en) A kind of Digital Object Unique Identifier recognition methods and mobile terminal
CN110611732B (en) Window control method and related product
CN110796673B (en) Image segmentation method and related product
CN109376701B (en) Fingerprint identification method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant