WO2019162842A1 - A system and a method for customizing an image based on facial expressions


Info

Publication number
WO2019162842A1
Authority
WO
WIPO (PCT)
Prior art keywords
mask
image
facial expression
face
score
Prior art date
2018-02-20
Application number
PCT/IB2019/051360
Other languages
French (fr)
Inventor
Vipul SAXENA
Kavin Bharti MITTAL
Original Assignee
Hike Private Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2018-02-20
Filing date
2019-02-20
Publication date
2019-08-29
Application filed by Hike Private Limited filed Critical Hike Private Limited
Publication of WO2019162842A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G06V40/175 - Static expression

Abstract

The present invention provides methods and systems for customising an image based on at least one facial expression of at least one user. In a preferred embodiment, the method comprises analysis of the image for detecting at least one object and subsequently identifying at least one face based on the detection of the at least one object. Further, at least one score is generated based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face. Furthermore, at least one mask is superimposed on the at least one object of the image to customise the image, wherein the at least one mask is superimposed based on the at least one score.

Description

A SYSTEM AND A METHOD FOR CUSTOMIZING AN IMAGE BASED ON FACIAL EXPRESSIONS
FIELD OF THE INVENTION
The present invention generally relates to social networking and more particularly, to a system and method for customizing an image based on facial expressions.
BACKGROUND
With the increasing pace of advancement in communications technology, particularly the online exchange of messages between users through mobile applications, social networking applications have become widespread and feature-rich. These mobile applications, with their improved technical capabilities, are widely used and have proved to be a convenient and useful means of communication for users.
Since text messages are easy and convenient for users, most social networking applications are typically text-based and provide electronic messaging features such as email, short message service (SMS) text, etc. However, such text-based applications pose various limitations. For example, in many situations, using text may not be appropriate, since it is difficult to convey feelings or emotions through text alone. As another example, text may not be suitable for elderly users who may not be comfortable typing such messages.
Thus, in light of the above, social networking applications now intend to provide various forms of visual expression and coloured animations/icons such as emojis, emoticons, stickers, GIFs and other such media content, thereby providing a more user-friendly and easier way of communicating. These applications also provide options for using emojis, emoticons, stickers, etc. in combination with text for better understanding, which describes the user's real-time emotions and feelings in a more accurate and appealing manner. However, such emojis, emoticons, stickers, etc., used either separately or in conjunction with text, are still unable to satisfy users' needs and expectations because of the longer time taken by the user in conveying the message, emotions or feelings. Moreover, said social networking applications have been unable to recommend further options/emoticons related to the conversations between the users.
Another useful feature of social networking applications is the sharing of media items such as images, videos, etc. However, existing applications are unable to efficiently suggest or recommend relevant and appropriate emoticons/stickers for such images. A further limitation is that users may have to manually search the entire database for relevant emoticons/emojis/stickers, or may have to manually customise a face detected in the image, making the process more time-consuming.
Accordingly, in order to overcome the aforementioned problems inherent in existing social networking applications, there exists a need for an efficient mechanism for customising an image based on the facial expression(s) of users. The above-mentioned information in the background section is only intended to enhance the understanding of the reader with respect to the field to which the present invention pertains. Therefore, unless explicitly stated otherwise, none of the features or aspects discussed above should be construed as prior art merely because of their inclusion in this section.
SUMMARY
This section is provided to introduce certain objects and aspects of the present invention in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In view of the drawbacks and limitations of prior art systems, one object of the present invention is to provide a system and method for customising an image based on a facial expression of a user and providing the customised image to the user without the user having to explicitly download it. Another object of the invention is to provide a system and method for providing customised images to the user such that they are organised in an easy and convenient manner and are therefore easily accessible. Yet another object of the invention is to provide the customised image to the user without the user having to manually search the entire database for such relevant and/or customised images.
In view of the above-mentioned and other objects, one aspect of the invention relates to a method for customising an image based on at least one facial expression of at least one user. The method comprises analysing the image to detect at least one object; and identifying at least one face based on the detection of the at least one object, wherein the at least one face corresponds to the at least one user having the at least one facial expression, and the at least one face comprises the at least one object. Further, the method comprises generating at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face; followed by superimposing at least one mask on the at least one object of the image to customise the image, wherein the at least one mask is superimposed based on the at least one score.
Another aspect of the invention relates to a system for customising an image based on at least one facial expression of at least one user. The system comprises a detecting unit, a processing unit and a recommendation unit. The detecting unit is configured to analyse the image to detect at least one object, and to identify at least one face based on the detection of the at least one object, wherein the at least one face corresponds to the at least one user having the at least one facial expression, and the at least one face comprises the at least one object. The processing unit is configured to generate at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face. The recommendation unit is configured to superimpose at least one mask on the at least one object of the image to customise the image, wherein the at least one mask is superimposed based on the at least one score.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Some drawings may depict components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that such drawings include the electrical components or circuitry commonly used to implement such components. Connections between the sub-components of a component have not been shown in the drawings for the sake of clarity; therefore, all sub-components shall be assumed to be connected to each other unless explicitly stated otherwise in the disclosure herein.
Figure 1 illustrates a general overview of the system for customizing an image based on at least one facial expression of at least one user, in accordance with first exemplary embodiment of the present invention.
Figure 2 illustrates a block diagram of a user device for customizing an image based on at least one facial expression of at least one user, in accordance with second exemplary embodiment of the present invention.
Figure 3 illustrates a method for customizing an image based on at least one facial expression of at least one user, in accordance with the first exemplary embodiment of the present invention.
Figure 4 illustrates a method for customizing an image based on at least one facial expression of at least one user, in accordance with the second exemplary embodiment of the present invention.
The foregoing shall be more apparent and clear from the following more detailed explanation of the invention and the afore-mentioned drawings.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent, however, that embodiments of the present invention may be practiced without these specific details or with additional details that may be obvious to a person skilled in the art. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
The present invention relates to a system and a method for customising an image based on at least one facial expression of at least one user. When the system receives an image from a user (or user device) on any social networking application, the system detects faces in the image and identifies the facial expression of each detected face. Subsequently, the system generates a score based on each facial expression and accordingly superimposes a mask on the image in order to customise it, wherein superimposing may include, but is not limited to, adding stickers to the image, changing the background and adding filters. In a first embodiment, the system is implemented at a central server alone; in a second embodiment, the system is implemented at the user device alone. In yet another embodiment, the system is implemented partially at the central server and partially at the user device.
As used herein, the "social networking application" refers to a mobile or web application for social networking, wherby users can interact with each other by means of text, audio, video or a combination thereof. In a preferred embodiment, the social networking application is considered to be an instant messaging application that provides various enhanced features such as viewing and sharing content/text etc., read news, play games, shop, make payments and any other features as may be obvious to a person skilled in the art. Said social networking application may be integrated/employed on any computing device, wherein said computing device may include, but not limited to, a mobile phone, a smartphone, laptop, personal digital assistant, tablet computer, general purpose computer, or any other computing device as may be obvious to a person skilled in the art. As used herein, the "user device" refers to any electrical, electronic, electromechanical and computing device. The user device may include, but not limited, to a mobile phone, a smartphone, laptop, personal digital assistant, tablet computer, general purpose computer, or any other computing device as may be obvious to a person skilled in the art. Further, the user device may comprise an input means such as a keyboard, an operating system, a display interface, etc.
As used herein, the "image" comprises at least one single frame, a GIF frame and a multi-frame video. Further, the image comprises at least one object. The image may comprise at least one face (human or animal), wherein the at least one face comprises the at least one object. For example, a face may constitute a plurality of objects such as eyes, nose, ears, lips, etc.; therefore, a plurality of objects when combined together may form a face.
As used herein, the "facial expressions" is a state or position of the face that represents an emotional state, including but not limited to, an anger expression, a contempt expression, a disgust expression, a fear expression, a happiness expression and a neutral expression, a sad expression and a surprise expression.
As used herein, the "mask" denotes a graphic image used in the social networking application or a social network service, for expressing an emotion or an action through animations, emojis, GIFs, stickers, cartoons, text and a combination thereof. Further, the masks may also be used during chats (including one-to-one and group chats), while sharing content such as images, video etc. on the social networking application, etc. Furthermore, the mask may also contain a collection of stickers that may be related to each other by way of a common theme, type, emotion, festival, etc. There may be different sticker collections for different themes such as love, happiness, anger, sadness etc. The process of generation of the mask/s and/or sticker collection from various packs has been described in the subsequent paragraphs.
As used herein, "connect", "configure", "couple" and its cognate terms, such as "connects", "connected", "configured" and "coupled" may include a physical connection (such as a wired/wireless connection), a logical connection (such as through logical gates of semiconducting device), other suitable connections, or a combination of such connections, as may be obvious to a skilled person.
As used herein, "send", "transfer", "transmit", and their cognate terms like "sending", "sent", "transferring", "transmitting", "transferred", "transmitted", etc. include sending or transporting data or information including mask from one unit or component to another unit or component, wherein the mask may or may not be modified before or after sending, transferring, transmitting.
Figure 1 illustrates a general overview of the system [100] for customizing an image based on at least one facial expression of at least one user, in accordance with the first exemplary embodiment of the present invention, wherein the present system may be implemented at the central server [130]. As shown in figure 1, the system [100] comprises the user device [110] and the central server [130]. The user device [110] and the central server [130] are configured to communicate with each other through a network entity [120], wherein the network entity [120] may be wired or wireless and may correspond to a personal area network, a local area network, a metropolitan area network, a wide area network, the Internet, or a combination thereof. In a preferred embodiment, the central server [130] may be a cloud server. Further, the user device [110] comprises a communication unit [102] and a display unit [104], along with other units/modules including, but not limited to, input means and output means. The central server [130] comprises a communication unit [112], a detecting unit [114], a processing unit [116] and a recommendation unit [118], wherein the communication unit [112], the detecting unit [114] and the recommendation unit [118] are connected to the processing unit [116].
When a user uploads an image, the communication unit [102] of the user device [110] transmits the image to the central server [130] through the network entity [120]. On receiving the image, the communication unit [112] of the central server [130] is configured to transmit said image to the detecting unit [114] of the central server [130]. The detecting unit [114] is configured to analyse the image in order to detect at least one object. The detecting unit [114] is further configured to identify the at least one face based on the detection of the at least one object, wherein the at least one face corresponds to at least one user having at least one facial expression. In an exemplary embodiment, the at least one face may constitute a plurality of objects such as eyes, nose, ears, lips, etc.; therefore, a plurality of objects, when combined together, may form the at least one face.
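The disclosure leaves the detection technique to methods "already known in the existing art". As a minimal, non-authoritative sketch of what the detecting unit [114] might do, the following uses OpenCV's bundled Haar cascade; the library choice, model file and parameters are assumptions, not part of the disclosure.

```python
# A minimal sketch of the detection step, assuming OpenCV and its
# bundled frontal-face Haar cascade; the patent does not prescribe one.
import cv2

def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h), one per detected face."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection stands for the "at least one face"; the face region
    # contains the constituent objects (eyes, nose, lips, ...).
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```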
On receiving the detected at least one face, the processing unit [116] is configured to generate at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face. In an embodiment, the processing unit [116] is configured to assign at least one tag to the at least one facial expression.
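The scoring technique is likewise left to known art. A hedged sketch of this scoring and tagging step, assuming a pretrained expression classifier that returns one score per expression (the classifier and its interface are hypothetical):

```python
# Sketch of score generation and tag assignment; the classifier is an
# assumed black box returning one score per expression, in this order.
EXPRESSIONS = ["anger", "contempt", "disgust", "fear",
               "happiness", "neutral", "sadness", "surprise"]

def score_and_tag(face_crop, classifier):
    """Return per-expression scores for one face, plus the assigned tag
    (taken here to be the highest-scoring expression)."""
    scores = dict(zip(EXPRESSIONS, classifier(face_crop)))
    tag = max(scores, key=scores.get)
    return scores, tag
```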
The processing unit [116] is further configured to transmit the generated at least one score of each of the at least one face to the recommendation unit [118]. The recommendation unit [118] is configured to compare at least one assigned tag with at least one pre-defined tag, wherein the at least one pre-defined tag corresponds to the at least one mask. Further, the recommendation unit [118] is configured to recommend the at least one mask based on the at least one score of the at least one face and said comparison. Additionally, the recommendation unit [118] is configured to select the at least one mask in order to superimpose said at least one mask on the at least one object, wherein the at least one mask is an emoji, a GIF, a sticker, a cartoon, text or a combination thereof. In an exemplary embodiment, where the facial expression of a face resembles happiness, the face may be superimposed with a happy mask or a happy quote, or the background of the face may be brightened to portray cheerfulness/joy.
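To make the tag comparison concrete, here is a sketch in which each pre-defined tag maps to candidate masks; the mapping, the mask names and the score threshold are illustrative assumptions:

```python
# Hedged sketch of the tag comparison performed by the recommendation
# unit; PREDEFINED_TAGS and the threshold are illustrative only.
PREDEFINED_TAGS = {              # pre-defined tag -> candidate masks
    "happiness": ["happy_sticker", "sunny_filter", "joy_quote"],
    "anger":     ["angry_sticker", "flame_filter"],
    "sadness":   ["sad_sticker", "rain_filter"],
}

def recommend_masks(assigned_tag: str, score: float, threshold: float = 0.5):
    """Recommend masks whose pre-defined tag matches the assigned tag,
    provided the face's expression score clears a nominal threshold."""
    if score < threshold:
        return []
    return PREDEFINED_TAGS.get(assigned_tag, [])
```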
In an embodiment where at least two users (at least two faces) are identified, the processing unit [116] is configured to prioritize the at least one facial expression of the at least two users based on the at least one score corresponding to the at least one facial expression. In an exemplary embodiment, when the user inputs the captured image to the processing unit [116], the processing unit [116] computes a score for each possible human expression; for instance, the anger facial expression may be allocated a score of 0.0955, while the happiness facial expression may be allocated a score of 3.99. The expression having the highest score is given the higher priority. Subsequently, the recommendation unit [118] is configured to recommend the at least one mask based on the priority of the at least one score of the at least two faces. Therefore, if the facial expression resembling anger has a higher priority than the facial expression resembling happiness, the recommendation unit [118] is configured to recommend anger-related masks before recommending happiness-related masks. In an exemplary event of three faces with three facial expressions, where two of the facial expressions resemble happiness while the third resembles sadness, the recommendation unit [118] is configured to recommend masks based on the majority and thus recommends happiness-related masks.
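The two multi-face behaviours just described, ranking expressions by score and falling back to the majority expression, can be sketched as follows, reusing the example scores from the description:

```python
# Sketch of prioritisation for multi-face images: rank by score, and
# take the majority expression when several faces are present.
from collections import Counter

def prioritise(face_scores):
    """face_scores: (expression, score) pairs, one per face; returns
    expressions ordered by descending score (highest priority first)."""
    return [e for e, _ in sorted(face_scores, key=lambda p: p[1], reverse=True)]

def majority_expression(face_scores):
    """Majority vote across faces; with two happy faces and one sad face
    this returns 'happiness', matching the example in the description."""
    return Counter(e for e, _ in face_scores).most_common(1)[0][0]

# Scores from the description: anger 0.0955, happiness 3.99 -> happiness
# has the highest score and so is given the higher priority.
print(prioritise([("anger", 0.0955), ("happiness", 3.99)]))
```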
Further, the recommendation unit [118] is configured to transmit the suggestions, i.e. the superimposed image, to the communication unit [112] of the central server [130], which is further configured to share said superimposed image with the display unit [104] of the user device [110]. The superimposed image and the suggestions are finally displayed to the user on the display unit [104].
The system [100] further comprises at least one memory for storing at least one of the suggestions/recommendations, i.e. the at least one mask, the pre-defined tags and the comparison of the pre-defined tags with the assigned tags, wherein said storage is in the form of tables, schemas, or other elements as may be obvious to a person skilled in the art. Further, the at least one memory may be present in at least one of the user device [110] and the central server [130]. The at least one memory may include, but is not limited to, a random access memory (RAM), a read only memory (ROM), a hard disk drive (HDD), a secure digital (SD) card or any such memory as may be obvious to a person skilled in the art.
Additionally, figure 2 illustrates a block diagram of the user device [110] for customizing the image based on the at least one facial expression of the at least one user, in accordance with the second exemplary embodiment of the present invention, wherein the present system may be implemented at the user device [110]. As shown in figure 2, the communication unit [102] of the user device [110] transmits the image uploaded by the user to the detecting unit [202] of the user device [110]. The detecting unit [202] is then configured to analyse the image for detecting the at least one object and to identify the at least one face based on the detection of the at least one object, wherein the at least one face corresponds to at least one user having at least one facial expression. The at least one face comprises the at least one object.
Subsequently, on receiving the detected at least one face, the processing unit [204] is configured to generate at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face. In an embodiment, the processing unit [204] is configured to assign at least one tag to the at least one facial expression. The processing unit [204] is further configured to transmit the generated at least one score of each of the at least one face to the recommendation unit [206], which is configured to compare at least one assigned tag with at least one pre-defined tag, wherein the at least one pre-defined tag corresponds to the at least one mask. Further, the recommendation unit [206] is configured to recommend the at least one mask based on the at least one score of the at least one face and said comparison. Additionally, the recommendation unit [206] is configured to select the at least one mask in order to superimpose said at least one mask on the at least one object, wherein the at least one mask is an emoji, a GIF, a sticker, a cartoon, text or a combination thereof.
In an embodiment where at least two users (at least two faces) are identified, the processing unit [204] of the user device [110] is configured to prioritize the at least one facial expression of the at least two users based on the at least one score corresponding to the at least one facial expression. Subsequently, the recommendation unit [206] is configured to recommend the at least one mask based on the priority of the at least one score of the at least two faces. Therefore, if the facial expression resembling anger has a higher priority than the facial expression resembling happiness, the recommendation unit [206] is configured to recommend anger-related masks before recommending happiness-related masks. In an exemplary event of three faces with three facial expressions, where two of the facial expressions resemble happiness while the third resembles sadness, the recommendation unit [206] is configured to recommend masks based on the majority and thus recommends happiness-related masks.
Further, the recommendation unit [206] is configured to transmit the suggestions, i.e. the superimposed image, to the communication unit [102] and further to the display unit [104] of the user device [110]. The superimposed image and the suggestions are finally displayed to the user on the display unit [104].
As used herein, a "processing unit" of the central device [130] and the user device [102] includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present invention.
Figure 3 illustrates a method for customizing the image based on the at least one facial expression of at least one user, in accordance with the first exemplary embodiment of the present invention. The following method [300] includes the detailed steps involved in customizing the image, wherein the method [300] may be implemented at the central server [130] and may initiate at step 302, where the user (using the user device [110]) uploads the image and the communication unit [102] of the user device [110] transmits the image to the communication unit [112] of the central server [130] through the network entity [120].
At step 304, the communication unit [112] of the central server [130] is configured to transmit said image to the detecting unit [114] of the central server [130].
At step 306, the detecting unit [114] is configured to analyse the image for detecting the at least one object, wherein said analysis is based on object detection techniques and methods already known in the existing art and obvious to a person skilled in the art.
At step 308, the detecting unit [114] is further configured to identify the at least one face based on the detection of the at least one object, wherein the at least one face corresponds to at least one user having at least one facial expression. The at least one face comprises the at least one object. In an embodiment, a plurality of objects, when combined together, may form the at least one face.
At step 310, the detecting unit [114] is configured to transmit said identified at least one face to the processing unit [116].
At step 312, on receiving the detected at least one face, the processing unit [116] is configured to generate at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face. In an exemplary embodiment, said at least one score is generated based on techniques and methods already known in the existing art and obvious to a person skilled in the art. Further, in an embodiment, the processing unit [116] is configured to assign at least one tag to the at least one facial expression, wherein each of the at least one tag is subsequently compared with at least one pre-defined tag.
At step 314, the processing unit [116] is further configured to transmit the generated at least one score of each of the at least one face to the recommendation unit [118].
At step 316, the recommendation unit [118] is configured to compare at least one assigned tag with at least one pre-defined tag, wherein the at least one pre-defined tag corresponds to the at least one mask.
At step 318, the recommendation unit [118] is configured to recommend the at least one mask based on the at least one score of the at least one face and said comparison.
At step 320, the recommendation unit [118] is configured to select the at least one mask in order to superimpose said at least one mask on the at least one object, wherein the at least one mask is an emoji, a GIF, a sticker, a cartoon, text or a combination thereof. In an exemplary embodiment, where the facial expression of a face resembles happiness, the face may be superimposed with a happy mask or a happy quote, or the background of the face may be brightened to portray cheerfulness/joy.
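A minimal sketch of the superimposition of step 320, assuming the selected mask is a transparent PNG sticker scaled to the detected face's bounding box (Pillow-based; the file handling and sizing policy are assumptions):

```python
# Minimal superimposition sketch using Pillow: paste a transparent
# sticker over the detected face region.
from PIL import Image

def superimpose_mask(image_path, mask_path, face_box, out_path):
    """face_box: (x, y, w, h) bounding box of the detected face."""
    base = Image.open(image_path).convert("RGBA")
    mask = Image.open(mask_path).convert("RGBA")
    x, y, w, h = face_box
    mask = mask.resize((w, h))        # fit the mask to the face region
    base.paste(mask, (x, y), mask)    # alpha channel used as paste mask
    base.convert("RGB").save(out_path)
```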
At step 322, the recommendation unit [118] is configured to transmit the suggestions, i.e. the superimposed image, to the display unit [104] of the user device [110] by transmitting the same to the communication unit [112] of the central server [130].
At step 324, the display unit [104] of the user device [110] is configured to display the suggestions and superimposed image to the user on a user interface of the user device. The method [300] then terminates at step 326.
Figure 4 illustrates a method for customizing an image based on at least one facial expression of at least one user, in accordance with the second exemplary embodiment of the present invention. The following method [400] includes the detailed steps involved in customizing the image, wherein the method [400] may be implemented at the user device [110] and may initiate at step 402, where the user (using the user device [110]) uploads the image.
At step 404, after completion of step 402, the communication unit [102] of the user device [110] transmits the image to the detecting unit [202] of the user device [110].
At step 406, the detecting unit [202] is then configured to analyse the image for detecting the at least one object.
At step 408, the detecting unit [202] is then configured to identify the at least one face based on the detection of the at least one object, wherein the at least one face corresponds to at least one user having at least one facial expression. The at least one face comprises the at least one object.
At step 410, the detecting unit [202] is configured to transmit said identified at least one face to the processing unit [204] of the user device [110].
At step 412, on receiving the detected at least one face, said processing unit [204] is configured to generate at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face. In an embodiment, the processing unit [204] is configured to assign at least one tag to the at least one facial expression.
At step 414, the processing unit [204] is further configured to transmit the generated at least one score of each of the at least one face to the recommendation unit [206] of the user device.
At step 416, the recommendation unit [206] is configured to compare at least one assigned tag with at least one pre-defined tag, wherein the at least one pre-defined tag corresponds to the at least one mask.
At step 418, the recommendation unit [206] is configured to recommend the at least one mask based on the at least one score of the at least one face and said comparison.
At step 420, the recommendation unit [206] is configured to select the at least one mask in order to superimpose said at least one mask on the at least one object, wherein the at least one mask is an emoji, a GIF, a sticker, a cartoon, text or a combination thereof. In an exemplary embodiment, where the facial expression of a face resembles happiness, the face may be superimposed with a happy mask or a happy quote, or the background of the face may be brightened to portray cheerfulness/joy.
At step 422, the recommendation unit [206] is configured to transmit the suggestions, i.e. the superimposed image, to the display unit [104] of the user device [110]. At step 424, the display unit [104] of the user device [110] is configured to display the suggestions and the superimposed image to the user. The method [400] then terminates at step 426.
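Composing the sketches above gives one possible end-to-end rendering of the device-side flow of figures 2 and 4; every helper, file name and policy choice here is an assumption layered on the disclosure, not the claimed implementation:

```python
# End-to-end sketch of the on-device flow: detect -> score and tag ->
# recommend -> superimpose, reusing the illustrative helpers above.
import cv2

def customise_image(image_path, classifier, out_path="customised.png"):
    image = cv2.imread(image_path)
    per_face = []
    for (x, y, w, h) in detect_faces(image_path):
        scores, tag = score_and_tag(image[y:y + h, x:x + w], classifier)
        per_face.append((tag, scores[tag], (x, y, w, h)))
    if not per_face:
        return None                    # no face, nothing to customise
    # With several faces, recommend masks for the majority expression.
    tag = majority_expression([(t, s) for t, s, _ in per_face])
    masks = recommend_masks(tag, max(s for t, s, _ in per_face if t == tag))
    if masks:
        # Superimpose the first recommended mask on a matching face;
        # the "<mask>.png" file name is hypothetical.
        box = next(b for t, _, b in per_face if t == tag)
        superimpose_mask(image_path, masks[0] + ".png", box, out_path)
    return out_path
```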
The various elements of the present invention discussed above may be implemented in hardware, software or a hardware-software combination for performing the functions and/or operations described herein. The central server [130] and the user device [110] may each include a bus or other communication mechanism for communicating information, and a processor coupled with the bus for processing information and data or sets of data.
In an embodiment, the techniques described herein are implemented by one or more special-purpose computing devices that may be hard-wired to perform the techniques, may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination thereof. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques.
Though a limited number of user devices [110], central servers [130], network entities [120] and the components/sub-systems therein have been shown in the figures, it will be appreciated by those skilled in the art that the system [100] of the present invention encompasses any number and varied types of said entities/elements.
While the invention has been explained with reference to certain embodiments and examples, it will be appreciated that various changes can be made to the embodiments without departing from the principles of the present invention, and all such changes and embodiments are encompassed by the present invention.

Claims

We claim:
1. A method for customising an image based on at least one facial expression of at least one user, the method comprising:
- analysing the image to detect at least one object;
- identifying at least one face based on the detection of the at least one object, wherein
the at least one face corresponds to the at least one user having the at least one facial expression, and
the at least one face comprises the at least one object;
- generating at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face; and
- superimposing at least one mask on the at least one object of the image to customise the image, wherein the at least one mask is superimposed based on the at least one score.
2. The method as claimed in claim 1, further comprising prioritising the at least one facial expression based on the at least one score corresponding to the at least one facial expression of the at least one user, wherein the at least one facial expression is prioritised in an event of identification of at least two users.
3. The method as claimed in claim 1, further comprising assigning at least one tag to the at least one facial expression.
4. The method as claimed in claim 1, further comprising recommending at least one mask based on the at least one score of the at least one face, wherein the at least one mask is recommended based on a comparison of at least one assigned tag with at least one pre-defined tag.
5. The method as claimed in claim 4, wherein the at least one pre-defined tag corresponds to the at least one mask.
6. The method as claimed in claim 1, further comprising selecting the at least one mask for superimposing the at least one mask on the at least one object of the image.
7. The method as claimed in claim 1, wherein the image comprises at least one of a single frame, a GIF frame and a multi-frame video.
8. The method as claimed in claim 1, wherein the at least one facial expression resembles at least one of an anger expression, a contempt expression, a disgust expression, a fear expression, a happiness expression, a neutral expression, a sad expression and a surprise expression.
9. The method as claimed in claim 1, wherein the at least one mask comprises at least one of a text, an emoticon, a theme and a filter.
10. A system for customising an image based on at least one facial expression of at least one user, the system comprising:
- a detecting unit [202, 114] configured to:
analyse the image to detect at least one object;
identify at least one face based on the detection of the at least one object, wherein
the at least one face corresponds to the at least one user having the at least one facial expression, and
the at least one face comprises the at least one object;
- a processing unit [204, 116] configured to generate at least one score based on the at least one facial expression, wherein each of the at least one score corresponds to each of the at least one face; and
- a recommendation unit [206, 118] configured to superimpose at least one mask on the at least one object of the image to customise the image, wherein the at least one mask is superimposed based on the at least one score.
11. The system as claimed in claim 10, wherein the processing unit [204, 116] is further configured to:
- prioritise the at least one facial expression based on the at least one score corresponding to the at least one facial expression of the at least one user, wherein the at least one facial expression is prioritised in an event of identification of at least two users; and
- assign at least one tag to the at least one facial expression.
12. The system as claimed in claim 10, wherein the recommendation unit [206, 118] is further configured to:
- recommend at least one mask based on the at least one score of the at least one face, wherein
the at least one mask is recommended based on a comparison of at least one assigned tag with at least one pre-defined tag, and
the at least one pre-defined tag corresponds to the at least one mask; and
- select the at least one mask for superimposing the at least one mask on the at least one object of the image.
13. The system as claimed in claim 10, wherein the detecting unit [202, 114] and the recommendation unit [206, 118] are connected to the processing unit [204, 116].
PCT/IB2019/051360 2018-02-20 2019-02-20 A system and a method for customizing an image based on facial expressions WO2019162842A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201811006456 2018-02-20
IN201811006456 2018-02-20

Publications (1)

Publication Number Publication Date
WO2019162842A1 true WO2019162842A1 (en) 2019-08-29

Family

ID=67686749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/051360 WO2019162842A1 (en) 2018-02-20 2019-02-20 A system and a method for customizing an image based on facial expressions

Country Status (1)

Country Link
WO (1) WO2019162842A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11636662B2 (en) 2021-09-30 2023-04-25 Snap Inc. Body normal network light and rendering control
US11651572B2 (en) 2021-10-11 2023-05-16 Snap Inc. Light and rendering of garments
US11670059B2 (en) 2021-09-01 2023-06-06 Snap Inc. Controlling interactive fashion based on body gestures
US11673054B2 (en) 2021-09-07 2023-06-13 Snap Inc. Controlling AR games on fashion items
US11734866B2 (en) 2021-09-13 2023-08-22 Snap Inc. Controlling interactive fashion based on voice
US11900506B2 (en) * 2021-09-09 2024-02-13 Snap Inc. Controlling interactive fashion based on facial expressions

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9025835B2 (en) * 2011-10-28 2015-05-05 Intellectual Ventures Fund 83 Llc Image recomposition from face detection and facial features
WO2017058733A1 (en) * 2015-09-29 2017-04-06 BinaryVR, Inc. Head-mounted display with facial expression detecting capability

Similar Documents

Publication Publication Date Title
US11303590B2 (en) Suggested responses based on message stickers
WO2019162842A1 (en) A system and a method for customizing an image based on facial expressions
KR102050334B1 (en) Automatic suggestion responses to images received in messages, using the language model
CN106415664B (en) System and method for generating a user facial expression library for messaging and social networking applications
CN111557006B (en) Hybrid intelligent method for extracting knowledge about inline annotations
EP3713159B1 (en) Gallery of messages with a shared interest
US10154071B2 (en) Group chat with dynamic background images and content from social media
US10311916B2 (en) Gallery of videos set to an audio time line
US11455151B2 (en) Computer system and method for facilitating an interactive conversational session with a digital conversational character
US10275838B2 (en) Mapping social media sentiments
US10708203B2 (en) Systems and methods for indicating emotions through electronic self-portraits
CN109074523A (en) Unified message search
JP2020521995A (en) Analyzing electronic conversations for presentations on alternative interfaces
US10733496B2 (en) Artificial intelligence entity interaction platform
WO2007134402A1 (en) Instant messaging system
KR20130115177A (en) Method and system to share, synchronize contents in cross platform environments
US9577963B2 (en) Application for augmenting a message with emotional content
CN111052107A (en) Topic guidance in conversations
KR101567555B1 (en) Social network service system and method using image
JPWO2014141976A1 (en) Method of classifying users in social media, computer program, and computer
US20170111775A1 (en) Media messaging methods, systems, and devices
Afolaranmi Social Media and Marital Choices: Its Implications on Contemporary Marriage
Radulovic et al. Smiley ontology
US20240146673A1 (en) Method for correcting profile image in online communication service and apparatus therefor
US20230289740A1 (en) Management of in room meeting participant

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19757183

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19757183

Country of ref document: EP

Kind code of ref document: A1