WO2020020269A1 - Photographing method and device for automatically matching props - Google Patents

Photographing method and device for automatically matching props

Info

Publication number
WO2020020269A1
WO2020020269A1 (PCT/CN2019/097622, CN2019097622W)
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
information
props
user
prop
Prior art date
Application number
PCT/CN2019/097622
Other languages
English (en)
French (fr)
Inventor
陈建江
刘墨
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2020020269A1 publication Critical patent/WO2020020269A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/04 — Context-preserving transformations, e.g. by using an importance map

Definitions

  • the present invention relates to the technical field of mobile terminals, and in particular, to a photographing method and device for automatically matching props.
  • Smartphone photo stickers are widely used, especially among female users. Adding a sticker to a self-portrait before sharing it with friends greatly increases its fun and cuteness, and a shared photo can resonate with friends and spark more conversation. In other words, when taking a photo, users want to add to, decorate, and beautify the picture content.
  • at present, mobile phone photo stickers mainly work as follows: the user selects a sticker in the photo preview, the sticker tracks the face position, and when the photo is taken the face and the sticker are composited into one picture.
  • however, current sticker solutions have clear shortcomings: the sticker must be selected manually (when the user wants a different sticker, it has to be picked from a candidate library), and the sticker cannot respond to the user's expression (whether the user is happy or angry, the sticker does not change).
  • moreover, sticker props are mainly hats, ears, eyebrows, beards, blush, and the like, so their expressive range is limited.
  • the technical problem solved by the solutions provided in the embodiments of the present invention is the inability to satisfy users' demand for diverse picture decorations when taking photos.
  • after starting the camera and entering the automatic-prop-decoration state, the mobile terminal acquires a shooting scene including user image information and environment information; according to the shooting scene, the mobile terminal selects prop information matching the shooting scene from a preset prop library;
  • the mobile terminal superimposes the user image information and the prop information to generate a composite image to be captured.
  • an acquisition module, configured to acquire a shooting scene including user image information and environment information after the camera is started and the automatic-prop-decoration state is entered;
  • a selection module, configured to select prop information matching the shooting scene from a preset prop library according to the shooting scene;
  • a generation module, configured to superimpose the user image information and the prop information to generate a composite image to be captured.
  • a device for taking photos with automatically matched props includes a processor and a memory coupled to the processor.
  • the memory stores a program, runnable on the processor, for taking photos with automatically matched props; when the program is executed by the processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
  • a computer storage medium provided according to an embodiment of the present invention stores a program for taking photos with automatically matched props; when the program is executed by a processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
  • the user starts the mobile phone camera and chooses to enter the automatic-prop-decoration state.
  • in the camera preview, the user strikes various poses; the camera obtains the preview picture, analyzes the human pose in the picture, and automatically loads props that match the current scene.
  • the props are matched automatically: only when the user makes a certain pose can the corresponding prop be matched.
  • if the user wants a particular favorite prop, the user must make a convincing action pose, which increases the interactivity of the shooting process.
  • the automatically matched props are likely to exceed the user's imagination, adding a sense of surprise while sparing the user the trouble of manual selection.
  • FIG. 1 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a photographing device for automatically matching props according to an embodiment of the present invention;
  • FIG. 3 is a system block diagram of photographing with automatically matched props according to an embodiment of the present invention;
  • FIG. 4 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention. As shown in FIG. 1, the method may include:
  • Step S101: after starting the camera and entering the automatic-prop-decoration state, the mobile terminal acquires a shooting scene including user image information and environment information.
  • the user image information includes any one or a combination of the following: the user's facial features, the user's gesture information, the user's body posture information, and the user's voice features. The environment information includes the shooting time, shooting location, shooting season, and shooting weather, among others; the shooting time includes any of: early morning, morning, noon, afternoon, evening, and late night; the shooting season includes any of: spring, summer, autumn, and winter; and the shooting weather includes any of: sun, wind, cloud, fog, rain, snow, frost, thunder, and hail.
  • Step S102: according to the shooting scene, the mobile terminal selects prop information matching the shooting scene from a preset prop library.
  • selecting, by the mobile terminal according to the shooting scene, prop information matching the shooting scene from a preset prop library includes: the mobile terminal analyzes the user image information and the environment information in the shooting scene separately to extract the shooting key information of the scene, and then selects, from the preset prop library, prop information matching the shooting key information; the prop library stores a shooting prop relationship table recording the correspondence between shooting key information and prop information.
  • the prop information includes any one or a combination of the following: animal prop information, plant prop information, food prop information, wearable prop information, and game tool information.
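The shooting prop relationship table just described can be sketched as a simple keyword-to-prop mapping. This is an illustrative sketch only: the keyword and prop names below are assumptions, not entries defined by the embodiment.

```python
# Hypothetical shooting prop relationship table: shooting key information
# (keywords extracted from the scene) mapped to candidate prop entries.
PROP_RELATION_TABLE = {
    "walking": ["dog_leash", "garland"],
    "park":    ["garland", "puppy"],
    "lying":   ["food", "sports_award"],
    "smiling": ["food"],
}

def select_props(shooting_keywords):
    """Collect every prop whose table entry matches an extracted keyword,
    preserving first-seen order and dropping duplicates."""
    matched = []
    for keyword in shooting_keywords:
        for prop in PROP_RELATION_TABLE.get(keyword, []):
            if prop not in matched:
                matched.append(prop)
    return matched
```

For the park example in the text, `select_props(["walking", "park"])` would return the garland and puppy props among its candidates.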
  • Step S103: the mobile terminal superimposes the user image information and the prop information to generate a composite image to be captured.
  • the method further includes: when the mobile terminal receives the user's shooting instruction, it generates a shooting-source composite image and then receives the user's operation instruction. When the operation instruction is an image-save instruction, the mobile terminal takes the shooting-source composite image as the shooting-target composite image and saves it; when the operation instruction is an image-edit instruction, the mobile terminal starts the image prop editing function and, by editing and adjusting the props in the shooting-source composite image, obtains and saves the shooting-target composite image.
  • editing and adjusting the props in the shooting-source composite image includes any one or a combination of the following: adding props to the shooting-source composite image; deleting props from it; and adjusting the position, size, angle, or color of props in it.
  • FIG. 2 is a schematic diagram of a photographing device for automatically matching props according to an embodiment of the present invention.
  • the device may include an acquisition module 201, a selection module 202, and a generation module 203.
  • the acquisition module 201 is configured to acquire a shooting scene including user image information and environment information after the camera is started and the automatic-prop-decoration state is entered.
  • the user image information obtained by the acquisition module 201 includes any one or a combination of the following: the user's facial features, gesture information, body posture information, and voice features. The environment information includes the shooting time, shooting location, shooting season, and shooting weather; the shooting time includes any of: early morning, morning, noon, afternoon, evening, and late night; the shooting season includes any of: spring, summer, autumn, and winter; and the shooting weather includes any of: sun, wind, cloud, fog, rain, snow, frost, thunder, and hail.
  • the acquisition module 201 may obtain the user's facial features, such as facial expression and eye gaze, through a facial feature recognition method, for example a neural-network or support-vector-machine face recognition method.
  • the user's gesture information and body posture information may each be obtained through a human-body feature recognition method.
  • the user's voice features may be obtained through a sound collection device.
  • the acquisition module 201 may also read data such as the time, location, weather, and season at the moment of shooting, locally or via the network, to obtain the corresponding environment information.
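Reading the local clock into the shooting-time and shooting-season categories listed above can be sketched as follows. The hour and month boundaries are illustrative assumptions, since the embodiment does not fix them:

```python
from datetime import datetime

def environment_keywords(ts, weather):
    """Bucket a timestamp into the shooting-time and shooting-season
    categories named in the text; `weather` is passed through as read
    from a local sensor or a network weather service."""
    hour, month = ts.hour, ts.month
    if hour < 5 or hour >= 20:
        time_kw = "late night"
    elif hour < 9:
        time_kw = "early morning"
    elif hour < 11:
        time_kw = "morning"
    elif hour < 13:
        time_kw = "noon"
    elif hour < 17:
        time_kw = "afternoon"
    else:
        time_kw = "evening"
    seasons = {12: "winter", 1: "winter", 2: "winter",
               3: "spring", 4: "spring", 5: "spring",
               6: "summer", 7: "summer", 8: "summer"}
    season_kw = seasons.get(month, "autumn")
    return {"time": time_kw, "season": season_kw, "weather": weather}
```

The returned keywords would feed directly into the key-information extraction of the selection module.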
  • the selection module 202 is configured to select prop information that matches the shooting scene from a preset prop library according to the shooting scene.
  • the prop information includes any one or a combination of the following: animal prop information, plant prop information, food prop information, wearable prop information, game tool information, and the like.
  • the selection module 202 includes: an extraction unit, configured to extract the shooting key information of the shooting scene by separately analyzing the user image information and the environment information in the scene; and a selection unit, configured to select, from a preset prop library according to the shooting key information, prop information that matches it, the prop library storing a shooting prop relationship table of the correspondence between shooting key information and prop information. For example, for a person walking in a park with flowers, the relatively obvious or prominent information extracted from the user image information and the environment information serves as the shooting key information; in such a scene the facial and eye information in the user image may be unobtainable, so the shooting key information includes the walking posture, the morning time, the park location, and so on. Props such as a garland and a puppy are then selected from the prop library, and a garland headdress is matched onto a female user's head with a puppy at her feet.
  • as another example, for a person lying on a lawn, the shooting key information extracted from the user image information and the environment information includes a smiling face, closed eyes, a lying posture, and so on, and props such as food or a sports award are selected from the prop library.
  • if the user's posture is still changing, user image analysis is not performed; instead, the user image information captured from the motion trajectory during the 1-2 seconds before the user's movement stops is taken as the shooting key information.
  • the generation module 203 is configured to superimpose the user image information and the prop information to generate a composite image to be captured.
  • an embodiment of the present invention provides a photographing device that automatically matches props.
  • the device includes a processor and a memory coupled to the processor, and the memory stores a program, runnable on the processor, for taking photos with automatically matched props; when the program is executed by the processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
  • a computer storage medium provided by an embodiment of the present invention stores a program for taking photos with automatically matched props; when the program is executed by a processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
  • FIG. 3 is a system block diagram of a photographing device for automatically matching props according to an embodiment of the present invention.
  • the system may include a camera unit 301, a user posture analysis unit 302, an environment data acquisition unit 303, a data analysis and matching unit 304, a prop library 305, and a prop superimposing unit 306.
  • camera unit 301: obtains the photo preview image and takes the photo.
  • user posture analysis unit 302: analyzes the preview image to determine whether it contains a face, hands, or a whole body, and provides key information such as the size, position, and angle of the recognized face, hands, and body in the preview image; it captures the motion trajectory during the 1-2 seconds before the user's movement stops and derives the key data defining the user's posture, as shown in Table 1, in preparation for the next step of prop matching.
  • environment data acquisition unit 303: obtains data such as the time, location, weather, and season, locally or via the network, to determine the scene at shooting time, in preparation for the next step of prop matching, as shown in Table 2.
  • data analysis and matching unit 304: based on the data obtained by the user posture analysis unit and the environment data acquisition unit, selects suitable props from the prop library to match the current shooting scene.
  • prop library 305: stores various props, each tagged with the keywords of Tables 1 and 2; as shown in Table 3, it provides props for the data analysis and matching unit to call up.
  • the prop library may be stored locally or on a cloud server (not shown in the figure).
  • prop superimposing unit 306: using the data from the user posture analysis unit, scales the prop according to the proportions of the human body and matches the prop's key point to the corresponding key point of the body. For example, if the matched prop is a spiked club, the club's handle must connect to the user's waving hand, and the handle's angle must match the user's grip.
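The scaling and anchoring performed by this superimposing step can be sketched as follows. This is a minimal illustration under assumed pixel coordinates; the reference body height and the function name are not part of the disclosure:

```python
def place_prop(prop_size, prop_anchor, body_height, body_keypoint,
               ref_body_height=400.0):
    """Scale a prop in proportion to the detected body height, then shift
    it so its own key point (e.g. a club handle) lands on the matching
    human key point (e.g. the waving hand).

    prop_size     -- (width, height) of the prop image in pixels
    prop_anchor   -- the prop's key point inside the prop image
    body_height   -- detected body height in the preview, in pixels
    body_keypoint -- (x, y) of the human key point in the preview
    """
    scale = body_height / ref_body_height
    scaled_size = (prop_size[0] * scale, prop_size[1] * scale)
    anchor = (prop_anchor[0] * scale, prop_anchor[1] * scale)
    # top-left corner that puts the scaled anchor onto the body key point
    top_left = (body_keypoint[0] - anchor[0], body_keypoint[1] - anchor[1])
    return top_left, scaled_size
```

Angle matching is omitted here; a rotation of the prop about its anchor point would be applied in the same coordinate frame.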
  • to analyze the user's posture accurately and clearly, the user posture analysis unit uses a multi-dimensional analysis method. The important dimensions come from analysis of the preview image and relate closely to the person, as shown in Table 1.
  • the auxiliary dimensions come from environment data acquisition, as shown in Table 2.
  • the principle of automatic prop matching is as follows: the user posture analysis unit derives from the shooting preview, for example, dimension 1-1: walking or running, and dimension 1-4: hand pulling; the environment data acquisition unit obtains dimension 2-4: street.
  • the data analysis and matching unit searches the prop library for the best-matching prop based on the known dimension data.
  • the "walking the dog" prop (tagged with the keywords "walking, pulling, street") is then matched into the preview, as shown in Table 3.
  • in an embodiment of automatic prop selection, the user posture analysis unit derives the keywords "walking" and "holding" from the shooting preview.
  • the keywords collected by the environment data acquisition unit are "rain" and "street".
  • the data analysis and matching unit searches the prop library and dimension attribute table for these keywords; a matched position is set to 1 and an unmatched position to 0, as shown in Table 4.
  • the data analysis and matching unit then sums the row data for each prop; the larger the value, the higher the matching degree:
  • prop n = (dimension 1-1) + (dimension 1-2) + (dimension 1-3) + (dimension 1-4) + (dimension 1-5) + ... + (dimension 2-1) + (dimension 2-2) + (dimension 2-3) + (dimension 2-4) + ...
  • based on the results, the data analysis and matching unit selects prop 4 (the umbrella, with the maximum score of 4); if the user enters the prop editing state, it recommends prop 4 (the umbrella) first and then prop 1 (walking the dog, with the second-largest score of 2).
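The keyword-count scoring described above can be sketched end to end. The prop names and keyword tags below are modeled on the umbrella example in the text but are assumptions, since the real Tables 3 and 4 are not reproduced here:

```python
# Hypothetical prop library: each prop tagged with dimension keywords,
# in the manner of Tables 3-4.
PROPS = {
    "walking the dog": {"walking", "pulling", "street"},
    "spiked club":     {"holding"},
    "flag":            {"holding", "wind"},
    "umbrella":        {"walking", "holding", "rain", "street"},
}

def rank_props(scene_keywords, props=PROPS):
    """Score each prop by the number of matched scene keywords (the row
    sum of 1s in Table 4) and return props ordered by descending score."""
    scores = {name: len(tags & scene_keywords) for name, tags in props.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```

With the scene keywords {"walking", "holding", "rain", "street"}, the umbrella scores 4 and walking-the-dog scores 2, reproducing both the selection and the runner-up recommendation in the text.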
  • furthermore, voice data collection can be added to obtain keywords from what the user says while taking the photo, such as the chat content at the time, the user laughing, or the user shouting.
  • FIG. 4 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention. As shown in FIG. 4, the method may include:
  • Step 1: the mobile phone camera is initialized.
  • Step 2: the camera enters the automatic-prop preview state.
  • Step 3: the user strikes a pose and makes an action.
  • Step 4: the user posture analysis unit obtains the key data of the user's current posture from the camera preview and the user's last action, and the environment data acquisition unit collects environment data.
  • Step 5: based on the data from the user posture analysis unit and the environment data acquisition unit, the data analysis and matching unit selects props from the prop library that match the current user posture and scene.
  • Step 6: is prop matching complete? If so, proceed to step 7; otherwise return to step 3.
  • the prop superimposing unit obtains the props from the prop distribution unit, obtains the superimposing position, size, and angle requirements from the image recognition unit, and matches the props to the appropriate positions.
  • Step 7: the user issues a photo instruction, and the mobile phone camera generates a photo from the content of the preview image.
  • Step 8: is the user satisfied with the matched props? If not, the user can proceed to step 9 to enter the prop editing state, adjust the size and position of the props, or re-select props from the recommendation library; otherwise the flow ends.
  • Step 9: the data analysis and matching unit records the user's selection behavior in the prop editing state; when there are multiple candidate props, the most frequently used props are recommended first according to the user's habits.
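Step 9's habit-based recommendation can be sketched as a usage counter. The class and method names are illustrative assumptions, not part of the disclosure:

```python
from collections import Counter

class PropRecommender:
    """Record which candidate prop the user picks while editing, and
    order future candidates so the most frequently chosen come first."""

    def __init__(self):
        self.usage = Counter()

    def record_choice(self, prop):
        self.usage[prop] += 1

    def order(self, candidates):
        # sorted() is stable, so never-chosen props keep their original
        # relative order behind the habitually chosen ones
        return sorted(candidates, key=lambda p: -self.usage[p])
```

A Counter keeps the bookkeeping trivial; persisting it between sessions would let the recommendation reflect long-term habits.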
  • when the content of the preview interface changes, such as the facial expression, gestures, or body pose, the preview interface continuously updates the matched props and their size and position to adapt to the new preview image.
  • besides mobile phone cameras, this function can be extended to tablets, digital cameras, and similar fields, and can also be extended to prop matching for video.
  • for example, when the mobile phone camera is started, a garland headdress can be matched onto a female user's head in a scene with flowers; in a dining scene, relevant food can be matched; in a writing or office scene, a stack of workbooks or documents can be matched; and in a reading scene, a dog earnestly reading a book can be matched.
  • the props in the prop library can be open-sourced from the Internet, with users participating in their creation and design; this keeps up with market trends, supplies a large number of popular comic props, and builds good interaction with users.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a photographing method and device for automatically matching props, relating to the technical field of mobile terminals. The method includes: after starting the camera and entering the automatic-prop-decoration state, a mobile terminal acquires a shooting scene including user image information and environment information; according to the shooting scene, the mobile terminal selects prop information matching the shooting scene from a preset prop library; and the mobile terminal superimposes the user image information and the prop information to generate a composite image to be captured.

Description

Photographing method and device for automatically matching props
This application claims priority to Chinese patent application CN201810824043.X, filed on 25 July 2018 and entitled "一种自动匹配道具的拍照方法及装置", the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the technical field of mobile terminals, and in particular to a photographing method and device for automatically matching props.
Background
Smartphone photo stickers are widely used among female users: adding a sticker to a self-portrait before sharing it with friends greatly increases its fun and cuteness, and a shared photo can resonate with friends and spark more conversation. In other words, when taking a photo, users want to add to, decorate, and beautify the picture content.
At present, mobile phone photo stickers mainly work as follows: the user selects a sticker in the photo preview, the sticker tracks the face position, and when the photo is taken the face and the sticker are composited into one picture.
However, current sticker solutions have clear shortcomings: the sticker must be selected manually (when the user wants a different sticker, it has to be picked from a candidate library), and the sticker cannot respond to the user's expression (whether the user is happy or angry, the sticker does not change). Sticker props are mainly hats, ears, eyebrows, beards, blush, and the like, so their expressive range is limited.
Apart from manually selected stickers, current technology does not automatically select matching props to decorate the picture according to hand gestures or body movements.
Summary of the invention
The technical problem solved by the solutions provided in the embodiments of the present invention is the inability to satisfy users' demand for diverse picture decorations when taking photos.
A photographing method for automatically matching props according to an embodiment of the present invention includes:
after starting the camera and entering the automatic-prop-decoration state, a mobile terminal acquires a shooting scene including user image information and environment information;
according to the shooting scene, the mobile terminal selects prop information matching the shooting scene from a preset prop library; and
the mobile terminal superimposes the user image information and the prop information to generate a composite image to be captured.
A photographing device for automatically matching props according to an embodiment of the present invention includes:
an acquisition module, configured to acquire a shooting scene including user image information and environment information after the camera is started and the automatic-prop-decoration state is entered;
a selection module, configured to select prop information matching the shooting scene from a preset prop library according to the shooting scene; and
a generation module, configured to superimpose the user image information and the prop information to generate a composite image to be captured.
A photographing device for automatically matching props according to an embodiment of the present invention includes a processor and a memory coupled to the processor; the memory stores a program, runnable on the processor, for taking photos with automatically matched props, and when the program is executed by the processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
A computer storage medium according to an embodiment of the present invention stores a program for taking photos with automatically matched props; when the program is executed by a processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
According to the solutions provided by the embodiments of the present invention, the user starts the mobile phone camera and chooses to enter the automatic-prop-decoration state; in the camera preview, the user strikes various poses, and the camera obtains the preview picture, analyzes the human pose in it, and automatically loads props matching the current scene onto the picture. Because the props are matched automatically, the user must make a certain pose to obtain the corresponding prop; to get a favorite prop, the user must make a convincing action pose, which increases the interactivity of the shooting process. The automatically matched props are likely to exceed the user's imagination, adding a sense of surprise while sparing the user the trouble of manual selection.
Brief description of the drawings
FIG. 1 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a photographing device for automatically matching props according to an embodiment of the present invention;
FIG. 3 is a system block diagram of photographing with automatically matched props according to an embodiment of the present invention;
FIG. 4 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention.
Detailed description
Preferred embodiments of the present invention are described in detail below with reference to the drawings. It should be understood that the preferred embodiments described below are intended only to illustrate and explain the present invention, not to limit it.
FIG. 1 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention. As shown in FIG. 1, the method may include:
Step S101: after starting the camera and entering the automatic-prop-decoration state, a mobile terminal acquires a shooting scene including user image information and environment information.
In one embodiment, the user image information includes any one or a combination of the following: the user's facial features, gesture information, body posture information, and voice features; the environment information includes the shooting time, shooting location, shooting season, and shooting weather, among others. The shooting time includes any of: early morning, morning, noon, afternoon, evening, and late night; the shooting season includes any of: spring, summer, autumn, and winter; and the shooting weather includes any of: sun, wind, cloud, fog, rain, snow, frost, thunder, and hail.
Step S102: according to the shooting scene, the mobile terminal selects prop information matching the shooting scene from a preset prop library.
In one embodiment, this selection includes: the mobile terminal analyzes the user image information and the environment information in the shooting scene separately to extract the shooting key information of the scene, and then selects, from the preset prop library, prop information matching the shooting key information; the prop library stores a shooting prop relationship table recording the correspondence between shooting key information and prop information. The prop information includes any one or a combination of the following: animal prop information, plant prop information, food prop information, wearable prop information, game tool information, and any other applicable prop information.
Step S103: the mobile terminal superimposes the user image information and the prop information to generate a composite image to be captured.
In another embodiment, after the mobile terminal generates the composite image to be captured, the method further includes: when the mobile terminal receives the user's shooting instruction, it generates a shooting-source composite image and receives the user's operation instruction; when the operation instruction is an image-save instruction, the mobile terminal takes the shooting-source composite image as the shooting-target composite image and saves it; when the operation instruction is an image-edit instruction, the mobile terminal starts the image prop editing function and, by editing and adjusting the props in the shooting-source composite image, obtains and saves the shooting-target composite image. Specifically, editing and adjusting the props in the shooting-source composite image includes any one or a combination of the following: adding props to the shooting-source composite image; deleting props from it; and adjusting the position, size, angle, or color of props in it.
FIG. 2 is a schematic diagram of a photographing device for automatically matching props according to an embodiment of the present invention. As shown in FIG. 2, the device may include an acquisition module 201, a selection module 202, and a generation module 203.
The acquisition module 201 is configured to acquire a shooting scene including user image information and environment information after the camera is started and the automatic-prop-decoration state is entered.
In one embodiment, the user image information acquired by the acquisition module 201 includes any one or a combination of the following: the user's facial features, gesture information, body posture information, and voice features; the environment information includes the shooting time, shooting location, shooting season, and shooting weather. The shooting time includes any of: early morning, morning, noon, afternoon, evening, and late night; the shooting season includes any of: spring, summer, autumn, and winter; and the shooting weather includes any of: sun, wind, cloud, fog, rain, snow, frost, thunder, and hail.
In one embodiment, the acquisition module 201 may obtain the user's facial features, such as facial expression and eye gaze, through a facial feature recognition method, for example a neural-network or support-vector-machine face recognition method. The user's gesture information and body posture information may each be obtained through a human-body feature recognition method. The user's voice features may be obtained through a sound collection device.
The acquisition module 201 may also read data such as the time, location, weather, and season at the moment of shooting, locally or via the network, to obtain the corresponding environment information.
The selection module 202 is configured to select prop information matching the shooting scene from a preset prop library according to the shooting scene. In one embodiment, the prop information includes any one or a combination of the following: animal prop information, plant prop information, food prop information, wearable prop information, game tool information, and the like.
In one embodiment, the selection module 202 includes: an extraction unit, configured to extract the shooting key information of the shooting scene by separately analyzing the user image information and the environment information in the scene; and a selection unit, configured to select, from the preset prop library according to the shooting key information, prop information that matches it, the prop library storing a shooting prop relationship table of the correspondence between shooting key information and prop information. For example, for a person walking in a park with flowers, the relatively obvious or prominent information extracted from the user image information and the environment information serves as the shooting key information; in such a scene the facial and eye information in the user image may be unobtainable, so the shooting key information includes the walking posture, the morning time, the park location, and so on. Props such as a garland and a puppy are then selected from the prop library, and a garland headdress is matched onto a female user's head with a puppy at her feet. As another example, for a person lying on a lawn, the shooting key information extracted from the user image information and the environment information includes a smiling face, closed eyes, a lying posture, and so on, and props such as food or a sports award are selected from the prop library.
In one embodiment, if the user's posture is still changing, user image analysis is not performed; instead, the user image information captured from the motion trajectory during the 1-2 seconds before the user's movement stops is taken as the shooting key information.
The generation module 203 is configured to superimpose the user image information and the prop information to generate a composite image to be captured.
A device for taking photos with automatically matched props according to an embodiment of the present invention includes a processor and a memory coupled to the processor; the memory stores a program, runnable on the processor, for taking photos with automatically matched props, and when the program is executed by the processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
A computer storage medium according to an embodiment of the present invention stores a program for taking photos with automatically matched props; when the program is executed by a processor, the steps of the photographing method for automatically matching props according to the embodiments of the present invention are implemented.
FIG. 3 is a system block diagram of a photographing device for automatically matching props according to an embodiment of the present invention. As shown in FIG. 3, the system may include a camera unit 301, a user posture analysis unit 302, an environment data acquisition unit 303, a data analysis and matching unit 304, a prop library 305, and a prop superimposing unit 306.
Camera unit 301: obtains the photo preview image and takes the photo.
User posture analysis unit 302: analyzes the preview image to determine whether it contains a face, hands, or a whole body, and provides key information such as the size, position, and angle of the recognized face, hands, and body in the preview image; it captures the motion trajectory during the 1-2 seconds before the user's movement stops and derives the key data defining the user's posture, as shown in Table 1, in preparation for the next step of prop matching.
Environment data acquisition unit 303: obtains data such as the time, location, weather, and season at shooting time, locally or via the network, to determine the scene when the user shoots, in preparation for the next step of prop matching, as shown in Table 2.
Data analysis and matching unit 304: based on the data obtained by the user posture analysis unit and the environment data acquisition unit, selects and determines suitable props from the prop library to match the current shooting scene.
Prop library 305: stores various props, each tagged with the keywords of Tables 1 and 2; as shown in Table 3, it provides props for the data analysis and matching unit to call up. In one embodiment, the prop library may be stored locally or on a cloud server (not shown in the figure).
Prop superimposing unit 306: using the data obtained by the user posture analysis unit, scales the prop according to the proportions of the human body and matches the prop's key point to the corresponding key point of the body. For example, if the matched prop is a spiked club, the club's handle must connect to the user's waving hand, and the handle's angle must match the user's grip.
To analyze the user's posture accurately and clearly, in the embodiment of the present invention the user posture analysis unit uses a multi-dimensional analysis method. The important dimensions come from analysis of the preview image and relate closely to the person, as shown in Table 1:
Dimension 1-1, posture of the torso and limbs;
Dimension 1-2, head pose and gaze of both eyes;
Dimension 1-3, facial expression;
Dimension 1-4, hand movements;
Dimension 1-5, scene surrounding the body.
Table 1: image and video analysis data table
Figure PCTCN2019097622-appb-000001
The auxiliary dimensions come from environment data acquisition, as shown in Table 2:
Dimension 2-1, time;
Dimension 2-2, location;
Dimension 2-3, weather;
Dimension 2-4, indoor or outdoor.
Table 2: environment acquisition data table
Figure PCTCN2019097622-appb-000002
Table 3: prop library and dimension attribute table 1
Figure PCTCN2019097622-appb-000003
The principle of automatic prop matching is as follows: the user posture analysis unit derives from the shooting preview, for example, dimension 1-1: walking or running, and dimension 1-4: hand pulling; the environment data acquisition unit obtains dimension 2-4: street. Based on the known dimension data, the data analysis and matching unit searches the prop library for the prop with the highest matching degree, and the "walking the dog" prop (tagged with the keywords "walking, pulling, street") is matched into the preview, as shown in Table 3.
Embodiment of automatic prop selection
The user posture analysis unit derives the keywords "walking" and "holding" from the shooting preview. The keywords collected by the environment data acquisition unit are "rain" and "street". The data analysis and matching unit searches the prop library and dimension attribute table for these keywords; a matched position is set to 1 and an unmatched position to 0, as shown in Table 4.
Table 4: prop library and dimension attribute table 2
Figure PCTCN2019097622-appb-000004
The data analysis and matching unit sums the row data for each prop; the larger the value, the higher the matching degree:
prop n = (dimension 1-1) + (dimension 1-2) + (dimension 1-3) + (dimension 1-4) + (dimension 1-5) + ... + (dimension 2-1) + (dimension 2-2) + (dimension 2-3) + (dimension 2-4) + ...
prop 1 = 1+0+0+0+0 + 0+0+0+1 = 2
prop 2 = 0+0+0+1+0 + 0+0+0+0 = 1
prop 3 = 0+0+0+1+0 + 0+0+0+0 = 1
prop 4 = 1+0+0+1+0 + 0+0+1+1 = 4
Based on the results, the data analysis and matching unit selects prop 4 (the umbrella, with the maximum score of 4); if the user enters the prop editing state, it recommends prop 4 (the umbrella) first and then prop 1 (walking the dog, with the second-largest score of 2).
Furthermore, voice data collection can be added to obtain keywords from what the user says while taking the photo, such as the chat content at the time, the user laughing, or the user shouting.
FIG. 4 is a flowchart of a photographing method for automatically matching props according to an embodiment of the present invention. As shown in FIG. 4, the method may include:
Step 1: the mobile phone camera is initialized;
Step 2: the camera enters the automatic-prop preview state;
Step 3: the user strikes a pose and makes an action;
Step 4: the user posture analysis unit obtains the key data of the user's current posture from the camera preview and the user's last action, and the environment data acquisition unit collects environment data;
Step 5: based on the data from the user posture analysis unit and the environment data acquisition unit, the data analysis and matching unit selects props from the prop library that match the current user posture and scene;
Step 6: is prop matching complete? If so, proceed to step 7; otherwise return to step 3.
The prop superimposing unit obtains the props from the prop distribution unit, obtains the superimposing position, size, and angle requirements from the image recognition unit, and matches the props to the appropriate positions.
Step 7: the user issues a photo instruction, and the mobile phone camera generates a photo from the content of the preview image;
Step 8: is the user satisfied with the matched props? If not, the user can proceed to step 9 to enter the prop editing state, adjust the size and position of the props, or re-select props from the recommendation library; otherwise the flow ends.
Step 9: the data analysis and matching unit records the user's selection behavior in the prop editing state; when there are multiple candidate props, the most frequently used props are recommended first according to the user's habits.
When the content of the preview interface changes, such as the facial expression, gestures, or body pose, the preview interface continuously updates the matched props and their size and position to adapt to the new preview image.
After the photo is taken, if the user is not satisfied with the automatically matched props, the photo can be returned to the editing state, where the user can pick a favorite from the several props matching the scene or adjust a prop's position and angle, and then save the photo again.
Besides mobile phone cameras, this function can be extended to tablets, digital cameras, and similar fields, and can also be extended to prop matching for video.
For example, when the mobile phone camera is started, a garland headdress can be matched onto a female user's head in a scene with flowers; in a dining scene, relevant food can be matched; in a writing or office scene, a stack of workbooks or documents can be matched; and in a reading scene, a dog earnestly reading a book can be matched.
The solutions provided by the embodiments of the present invention have the following beneficial effects:
1. When taking a photo with a mobile phone, relevant props are added and matched through an artificial-intelligence prop-matching function, which greatly improves the playability of photography.
2. Using the face prop library, props matching the current expression can be automatically matched to various facial expressions such as anger, joy, contemplation, an open mouth, or closed eyes; this can surprise the user and reduces the trouble of manually selecting stickers.
3. The props in the prop library can be open-sourced from the Internet, with users participating in their creation and design; this keeps up with market trends, supplies a large number of popular comic props, and builds good interaction with users.
Although the present invention has been described in detail above, the invention is not limited thereto, and those skilled in the art can make various modifications according to the principles of the present invention. Therefore, all modifications made in accordance with the principles of the present invention should be understood as falling within the protection scope of the present invention.

Claims (10)

  1. A photographing method for automatically matching props, comprising:
    after starting a camera and entering an automatic-prop-decoration state, acquiring, by a mobile terminal, a shooting scene including user image information and environment information;
    selecting, by the mobile terminal according to the shooting scene, prop information matching the shooting scene from a preset prop library; and
    superimposing, by the mobile terminal, the user image information and the prop information to generate a composite image to be captured.
  2. The method according to claim 1, wherein the user image information includes any one or a combination of the following: the user's facial features, the user's gesture information, the user's body posture information, and the user's voice features; the environment information includes a shooting time, a shooting location, a shooting season, and shooting weather; and the prop information includes any one or a combination of the following: animal prop information, plant prop information, food prop information, wearable prop information, and game tool information.
  3. The method according to claim 2, wherein selecting, by the mobile terminal according to the shooting scene, prop information matching the shooting scene from the preset prop library comprises:
    analyzing, by the mobile terminal, the user image information and the environment information in the shooting scene separately to extract shooting key information of the shooting scene; and
    selecting, by the mobile terminal according to the shooting key information, prop information matching the shooting key information from the preset prop library;
    wherein the preset prop library stores a shooting prop relationship table of the correspondence between shooting key information and prop information.
  4. The method according to claim 1, further comprising, after the mobile terminal generates the composite image to be captured:
    when the mobile terminal receives a shooting instruction from the user, generating a shooting-source composite image and receiving an operation instruction from the user;
    when the operation instruction is an image-save instruction, taking, by the mobile terminal, the shooting-source composite image as a shooting-target composite image and saving the shooting-target composite image; and
    when the operation instruction is an image-edit instruction, starting, by the mobile terminal, an image prop editing function, and obtaining and saving a shooting-target composite image by editing and adjusting the props in the shooting-source composite image.
  5. The method according to claim 4, wherein editing and adjusting, by the mobile terminal, the props in the shooting-source composite image comprises any one or a combination of the following:
    adding props to the shooting-source composite image;
    deleting props from the shooting-source composite image;
    adjusting the position of props in the shooting-source composite image;
    adjusting the size of props in the shooting-source composite image;
    adjusting the angle of props in the shooting-source composite image; and
    adjusting the color of props in the shooting-source composite image.
  6. A photographing device for automatically matching props, comprising:
    an acquisition module, configured to acquire a shooting scene including user image information and environment information after a camera is started and an automatic-prop-decoration state is entered;
    a selection module, configured to select prop information matching the shooting scene from a preset prop library according to the shooting scene; and
    a generation module, configured to superimpose the user image information and the prop information to generate a composite image to be captured.
  7. The device according to claim 6, wherein the user image information includes any one or a combination of the following: the user's facial features, the user's gesture information, the user's body posture information, and the user's voice features; the environment information includes a shooting time, a shooting location, a shooting season, and shooting weather; and the prop information includes any one or a combination of the following: animal prop information, plant prop information, food prop information, wearable prop information, and game tool information.
  8. The device according to claim 7, wherein the selection module comprises:
    an extraction unit, configured to extract shooting key information of the shooting scene by separately analyzing the user image information and the environment information in the shooting scene; and
    a selection unit, configured to select, from the preset prop library according to the shooting key information, prop information matching the shooting key information;
    wherein the prop library stores a shooting prop relationship table of the correspondence between shooting key information and prop information.
  9. A device for taking photos with automatically matched props, comprising a processor and a memory coupled to the processor, wherein the memory stores a program, runnable on the processor, for taking photos with automatically matched props, and the program, when executed by the processor, implements the steps of the photographing method for automatically matching props according to any one of claims 1 to 5.
  10. A computer storage medium storing a program for taking photos with automatically matched props, wherein the program, when executed by a processor, implements the steps of the photographing method for automatically matching props according to any one of claims 1 to 5.
PCT/CN2019/097622 2018-07-25 2019-07-25 一种自动匹配道具的拍照方法及装置 WO2020020269A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810824043.XA CN110766602A (zh) 2018-07-25 2018-07-25 一种自动匹配道具的拍照方法及装置
CN201810824043.X 2018-07-25

Publications (1)

Publication Number Publication Date
WO2020020269A1 true WO2020020269A1 (zh) 2020-01-30

Family

ID=69181346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/097622 WO2020020269A1 (zh) 2018-07-25 2019-07-25 一种自动匹配道具的拍照方法及装置

Country Status (2)

Country Link
CN (1) CN110766602A (zh)
WO (1) WO2020020269A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019401B (zh) * 2022-08-05 2022-11-11 上海英立视电子有限公司 一种基于图像匹配的道具生成方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101184165A (zh) * 2006-11-14 2008-05-21 株式会社万代南梦宫游戏 程序、信息存储载体、照片印刷装置及照片印刷方法
US20130039409A1 (en) * 2011-08-08 2013-02-14 Puneet Gupta System and method for virtualization of ambient environments in live video streaming
CN103024167A (zh) * 2012-12-07 2013-04-03 广东欧珀移动通信有限公司 一种移动终端拍照方法及系统
CN105427280A (zh) * 2015-11-03 2016-03-23 上海卓悠网络科技有限公司 拍照方法及系统
CN107562200A (zh) * 2017-09-01 2018-01-09 广州励丰文化科技股份有限公司 一种基于实景道具的vr画面控制方法及系统
CN107734142A (zh) * 2017-09-15 2018-02-23 维沃移动通信有限公司 一种拍照方法、移动终端及服务器


Also Published As

Publication number Publication date
CN110766602A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
US11533456B2 (en) Group display system
US8390648B2 (en) Display system for personalized consumer goods
US9319640B2 (en) Camera and display system interactivity
US9253447B2 (en) Method for group interactivity
US8849043B2 (en) System for matching artistic attributes of secondary image and template to a primary image
US8538986B2 (en) System for coordinating user images in an artistic design
US8237819B2 (en) Image capture method with artistic template design
US8854395B2 (en) Method for producing artistic image template designs
US8212834B2 (en) Artistic digital template for image display
US9633462B2 (en) Providing pre-edits for photos
US8849853B2 (en) Method for matching artistic attributes of a template and secondary images to a primary image
US20110029635A1 (en) Image capture device with artistic template design
US8289340B2 (en) Method of making an artistic digital template for image display
WO2022042776A1 (zh) 一种拍摄方法及终端
US20110157218A1 (en) Method for interactive display
US8345057B2 (en) Context coordination for an artistic digital template for image display
US20110025709A1 (en) Processing digital templates for image display
US20110029914A1 (en) Apparatus for generating artistic image template designs
US20110029860A1 (en) Artistic digital template for image display
US20110029562A1 (en) Coordinating user images in an artistic design
CN105827930A (zh) 一种辅助拍照的方法和装置
WO2020020269A1 (zh) 一种自动匹配道具的拍照方法及装置
US8994834B2 (en) Capturing photos
CN114500833A (zh) 拍摄方法、装置及电子设备
WO2019090603A1 (zh) 一种基于拍照功能的表情的添加方法及添加装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19841652

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.06.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19841652

Country of ref document: EP

Kind code of ref document: A1