WO2016054918A1 - Image processing method, apparatus and storage medium - Google Patents

Image processing method, apparatus and storage medium

Info

Publication number
WO2016054918A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
detection object
sample
detection
features
Prior art date
Application number
PCT/CN2015/079112
Other languages
English (en)
French (fr)
Inventor
彭和清
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2016054918A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features

Definitions

  • the present invention relates to image processing technologies, and in particular, to an image processing method, apparatus, and storage medium.
  • embodiments of the present invention are directed to providing an image processing method, apparatus, and storage medium, which can improve the comprehensive utilization value of image data, and can make the interaction between users more targeted.
  • An embodiment of the present invention provides an image processing method, where the method includes:
  • Before performing the object attribute analysis on the sample photo of the detection object, the method further includes:
  • performing detection object identification on the sample photos, and classifying the sample photos according to the different detection objects identified.
  • Before classifying the sample photos according to the identified different detection objects, the method further includes: parsing the sample photos to obtain the shooting time information and shooting location information of all the sample photos, and classifying the sample photos according to the proximity of the shooting time and shooting location.
  • the acquiring different object feature sequences corresponding to the detection object according to different reference features includes:
  • the obtaining the identification feature of the detection object according to the attribute feature, the feature vector, and the comprehensive feature vector includes:
  • the outputting the multimedia information corresponding to the detection object according to the identification feature includes:
  • An embodiment of the present invention further provides an image processing apparatus, where the apparatus includes: an acquisition module, a processing module, and an output module;
  • the acquiring module is configured to perform object attribute analysis on the sample photo of the detection object to obtain an attribute feature of the detection object;
  • the processing module is configured to acquire different object feature sequences corresponding to the detection object according to different reference features, and perform weighting processing on the obtained object feature sequences to obtain feature vectors and integrated feature vectors of the detection object;
  • the output module is configured to obtain an identification feature of the detection object according to the attribute feature, the feature vector, and the integrated feature vector, and output multimedia information corresponding to the detection object according to the identification feature.
  • the device further includes: a classification module configured to perform detection object identification on the sample photo, and classify the sample photos according to the identified different detection objects.
  • the device further includes: a pre-processing module configured to parse the sample photo, obtain shooting time information and shooting location information of all sample photos, and classify the sample photos according to the shooting time and the proximity of the shooting location.
  • the processing module is configured to identify different reference features in the sample photos of the detection object, obtain the corresponding object features according to the identified reference features, and sort the object features corresponding to the different reference features of all the obtained sample photos, to obtain the different object feature sequences of the detection object corresponding to the different reference features.
  • the output module is configured to match the attribute feature, the feature vector, and the integrated feature vector with the identification feature model in the preset feature database to obtain the identification feature of the detection object.
  • the output module is configured to match the identification feature information of the detection object with the object interaction model in the preset feature database, obtain an interaction type of the detection object, and output the multimedia information of the detection object corresponding to the interaction type.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for performing the image processing method of the embodiment of the present invention.
  • the image processing method, apparatus, and storage medium provided by the embodiments of the present invention perform object attribute analysis on the sample photo of the detection object to obtain the attribute feature of the detection object, and acquire different object feature sequences corresponding to the detection object according to different reference features.
  • the comprehensive utilization value of the image data can thus be improved. When the detection object is a person object, the multimedia information corresponding to the detection object is output based on the identification feature and the feature vector of the detection object, so the output multimedia information better matches the characteristics and preferences of the detection object; therefore, the user's interaction with the detection object can be made more targeted, improving the user's interpersonal interaction ability and user experience.
  • FIG. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of a method for preprocessing a sample photo according to an embodiment of the present invention
  • FIG. 3 is a flow chart showing a method for performing facial expression emotion feature recognition on a sample photo according to an embodiment of the present invention
  • FIG. 4 is a schematic flow chart of a method for performing action emotion feature recognition on a sample photo according to an embodiment of the present invention
  • FIG. 5 is a schematic flow chart of a method for performing interpersonal interaction emotion feature recognition on a sample photo according to an embodiment of the present invention
  • FIG. 6 is a schematic flow chart of a method for performing historical weather sentiment feature recognition on a sample photo according to an embodiment of the present invention
  • FIG. 7 is a schematic flowchart of a method for performing background-specific-object emotion feature recognition on a sample photo according to an embodiment of the present invention;
  • FIG. 8 is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • object attribute analysis is performed on the sample photo of the detection object to obtain the attribute feature of the detection object; different object feature sequences corresponding to the detection object are acquired according to different reference features, and weighting processing is performed on the obtained object feature sequences to obtain the feature vector and comprehensive feature vector of the detection object; the identification feature of the detection object is obtained according to the attribute feature, the feature vector, and the comprehensive feature vector, and the multimedia information corresponding to the detection object is output according to the identification feature.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in FIG. 1, the image processing method in this embodiment includes:
  • Step 101 performing object attribute analysis on a sample photo of the detection object, and obtaining an attribute feature of the detection object;
  • the method further includes: performing detection object identification on the sample photo, and classifying the sample photo according to the identified different detection objects;
  • the detection object may be a human object; and the detecting object identification of the sample photo includes: performing face recognition on the sample photo.
  • the method further comprises: pre-processing the sample photos according to the different identified detection objects.
  • FIG. 2 is a schematic flowchart of a method for pre-processing sample photos according to an embodiment of the present invention; as shown in FIG. 2, the method includes:
  • Step 2a: Identify whether the sample file is a sample photo or a sample image file; if it is a sample photo, perform step 2b; if it is a sample image file, perform step 2c;
  • identifying whether the sample file is a sample photo or a sample image file includes: identifying, by the file format of the sample file, whether it is a sample photo or a sample image file; for example, a file with format .jpg/.jpeg is a sample photo, and a file with format .mp4 is a sample image file.
  • Step 2b: Parse the sample photos, obtain the shooting time information and shooting location information of all the sample photos, classify the sample photos according to the proximity of the shooting time and shooting location, and end the processing flow;
  • classifying according to the proximity of the shooting time and shooting location follows the principle that photos whose shooting times are close and whose shooting locations are close belong together; for example, sample photos taken on September 10, 2014 at Tiananmen are grouped into one class.
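The grouping by proximity of shooting time and location can be sketched as a greedy pass over time-sorted photos. This is an illustrative sketch, not the patented method itself: the thresholds, the photo dictionaries, and the flat latitude/longitude distance below are all assumptions, since the embodiment leaves these values open.

```python
from datetime import datetime, timedelta
from math import hypot

# Hypothetical thresholds: photos taken within 2 hours and roughly 1 km
# of each other count as "close" (the patent does not fix these values).
TIME_THRESHOLD = timedelta(hours=2)
DIST_THRESHOLD = 0.01  # approximate degrees of lat/lon, ~1 km

def classify_photos(photos):
    """Greedily group photos whose shooting time and location are close.

    Each photo is a dict with 'time' (datetime) and 'loc' ((lat, lon)).
    Returns a list of groups, each group a list of photos.
    """
    groups = []
    for photo in sorted(photos, key=lambda p: p["time"]):
        for group in groups:
            ref = group[-1]  # compare against the group's latest photo
            close_in_time = photo["time"] - ref["time"] <= TIME_THRESHOLD
            close_in_space = hypot(photo["loc"][0] - ref["loc"][0],
                                   photo["loc"][1] - ref["loc"][1]) <= DIST_THRESHOLD
            if close_in_time and close_in_space:
                group.append(photo)
                break
        else:
            groups.append([photo])  # no close group found: start a new one
    return groups
```

A photo joins the first group whose most recent member is close in both time and location; otherwise it starts a new group.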
  • Step 2c: Extract a number of sample photos from the sample image file, and perform step 2b;
  • extracting a number of sample photos from the sample image file includes: using a motion image prediction algorithm to capture frames of the sample image file in which the detection object is present as sample photos.
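Step 2a's format check can be sketched as a simple extension lookup. The extension sets below are assumptions going beyond the .jpg/.jpeg and .mp4 examples the text gives; the frame extraction by a motion image prediction algorithm in step 2c would require a video library and is not shown here.

```python
import os

# Assumed extension sets; the embodiment only names .jpg/.jpeg and .mp4.
PHOTO_EXTS = {".jpg", ".jpeg"}
VIDEO_EXTS = {".mp4"}

def sample_file_kind(path):
    """Classify a sample file as 'photo', 'video', or 'unknown' by extension."""
    ext = os.path.splitext(path)[1].lower()
    if ext in PHOTO_EXTS:
        return "photo"
    if ext in VIDEO_EXTS:
        return "video"
    return "unknown"
```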
  • performing the object attribute analysis on the sample photo of the detection object to obtain the attribute features of the detection object includes:
  • the object attributes may include: wrinkles, body shape, skin color, and the like;
  • the attribute characteristics may include: gender, age, height, weight, and the like.
  • Step 102 Obtain different object feature sequences corresponding to the detection object according to different reference features, and perform weighting processing on the obtained object feature sequences to obtain feature vectors and integrated feature vectors of the detection object.
  • the acquiring different object feature sequences corresponding to the detection object according to different reference features includes:
  • performing Mth reference feature recognition on the sample photos, and matching the obtained Mth reference feature with the Mth reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the Mth reference feature; sorting all the object features of the detection object by time to obtain the Mth object feature sequence of the detection object, where M is a positive integer;
  • the value of M may be set according to the actual situation; when M is greater than 2, any one or more of the M reference features may be used when applying the method of the embodiment of the present invention.
  • for example, M may be 5: the first reference feature may be a facial expression feature; the second reference feature may be an action feature; the third reference feature may be an interpersonal interaction feature; the fourth reference feature may be a weather condition feature; and the fifth reference feature may be a background-specific-object feature.
  • FIG. 3 is a schematic flowchart of a method for performing facial expression emotion feature recognition on a sample photo according to an embodiment of the present invention. As shown in FIG. 3, the method for performing facial expression emotion feature recognition on a sample photo in this embodiment includes:
  • Step 3a: Perform facial expression feature recognition on the sample photos of the person object;
  • the facial expression features include: smiling, laughing, making a face, and the like.
  • Step 3b: Match the obtained facial expression features with the facial expression feature information in the preset feature database to obtain the facial expression emotional features of the person object corresponding to the facial expression features;
  • the facial expression emotional features include: joy, anger, sorrow, happiness, and the like.
  • Step 3c: Sort the facial expression emotional features of all the obtained sample photos of the person object by time to obtain the facial expression emotional feature sequence of the person object.
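Steps 3a to 3c amount to a per-photo lookup followed by a time sort. The expression-to-emotion table below is a hypothetical stand-in for the preset feature database; photo records are assumed to carry a recognized 'expression' label and a sortable 'time' value.

```python
# Hypothetical mapping from recognized facial expression features to
# emotional features; in the patent this lookup is a match against a
# preset feature database built from empirical statistics.
EXPRESSION_TO_EMOTION = {
    "smile": "joy",
    "laugh": "joy",
    "frown": "anger",
    "cry": "sorrow",
}

def expression_emotion_sequence(photos):
    """Steps 3a-3c: map each photo's expression to an emotional feature
    and return the emotional features sorted by shooting time."""
    ordered = sorted(photos, key=lambda p: p["time"])
    return [EXPRESSION_TO_EMOTION.get(p["expression"], "neutral")
            for p in ordered]
```

The same match-then-sort pattern applies to the action, interpersonal interaction, weather condition, and background-specific-object sequences described below, with a different lookup table each time.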
  • FIG. 4 is a schematic flowchart of a method for performing action emotion feature recognition on a sample photo according to an embodiment of the present invention. As shown in FIG. 4, the method for performing action emotion feature recognition on a sample photo in this embodiment includes:
  • Step 4a: Perform action feature recognition on the sample photos of the person object;
  • the action features include: jumping, making a V sign, and the like.
  • Step 4b: Match the obtained action features with the action feature information in the preset feature database to obtain the action emotional features of the person object corresponding to the action features;
  • the action emotional features include: lively, active, calm, steady, and the like.
  • Step 4c: Sort the action emotional features of all the obtained sample photos of the person object by time to obtain the action emotional feature sequence of the person object.
  • FIG. 5 is a schematic flowchart of a method for performing interpersonal interaction emotion feature recognition on a sample photo according to an embodiment of the present invention. As shown in FIG. 5, the method for performing interpersonal interaction emotion feature recognition on a sample photo in this embodiment includes:
  • Step 5a: Perform interpersonal interaction feature recognition on the sample photos of the person object;
  • the interpersonal interaction features include: the interaction relationship and positional relationship between the person object and other person objects in the sample photo, such as hugging another person object or being far apart from other person objects.
  • Step 5b: Match the obtained interpersonal interaction features with the interpersonal interaction feature information in the preset feature database to obtain the interpersonal interaction emotional features of the person object corresponding to the interpersonal interaction features;
  • the interpersonal interaction emotional features include: enthusiasm, affinity, indifference, and the like.
  • Step 5c: Sort the interpersonal interaction emotional features of all the obtained sample photos of the person object by time to obtain the interpersonal interaction emotional feature sequence of the person object.
  • FIG. 6 is a schematic flowchart of a method for performing historical weather sentiment feature recognition on a sample photo according to an embodiment of the present invention. As shown in FIG. 6 , the method for performing historical weather sentiment feature recognition on a sample photo in this embodiment includes:
  • Step 6a: Perform weather condition feature recognition on the sample photos of the person object;
  • the weather condition features include: sunny, cloudy, rainy, and the like.
  • Step 6b: Match the obtained weather condition features with the weather condition feature information in the preset feature database to obtain the weather condition emotional features of the person object corresponding to the weather condition features;
  • the weather condition emotional features include: cheerful, melancholy, romantic, and the like.
  • Step 6c: Sort the weather condition emotional features of all the obtained sample photos of the person object by time to obtain the weather condition emotional feature sequence of the person object.
  • FIG. 7 is a schematic flowchart of a method for performing background object specific emotion feature recognition on a sample photo according to an embodiment of the present invention. As shown in FIG. 7 , the method for performing background object specific emotion feature recognition on a sample photo in this embodiment includes:
  • Step 7a: Identify the background-specific objects in the sample photos of the person object;
  • the background-specific objects include: flowers, historical and cultural sites, and the like.
  • Step 7b: Match the obtained background-specific-object features with the background-specific-object feature information in the preset feature database to obtain the background-specific-object emotional features of the person object corresponding to the specific object features;
  • the background-specific-object emotional features include: loves travel, loves history, and the like.
  • Step 7c: Sort the background-specific-object emotional features of all the obtained sample photos of the person object by time to obtain the background-specific-object emotional feature sequence of the person object.
  • performing weighting processing on the M object feature sequences to obtain the feature vector and comprehensive feature vector of the detection object includes:
  • the weighting parameter in the weighting process may be set according to actual conditions
  • the feature vector is a feature vector of the detected object at a specific time, such as an emotion vector at a specific time.
  • the feature database in the embodiment of the present invention is a database preset according to empirical statistics.
  • the feature vector may be an index of the personality orientation of the person object at a specific time, with an index range of (1, 10), and the comprehensive feature vector may be the overall personality orientation of the person object; for example, the cheerfulness index of a person object at 8 o'clock in the morning may be 8, while the overall cheerfulness index, that is, the comprehensive feature vector, may be 5.
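A minimal numeric sketch of the weighting in step 102, assuming each of the M sequences scores the object per time slot on the 1 to 10 index scale mentioned above and that equal weights are used (the text only says the weighting parameters may be set according to actual conditions):

```python
def weight_sequences(sequences, weights=None):
    """Combine M per-time index sequences into a per-time feature
    vector and an overall comprehensive feature value.

    sequences: list of M lists, one index per time slot each.
    weights:   optional list of M weights; defaults to equal weights.
    """
    m = len(sequences)
    if weights is None:
        weights = [1.0 / m] * m  # assumed equal weighting
    slots = len(sequences[0])
    # Weighted combination per time slot: the "feature vector".
    feature_vector = [
        sum(w * seq[t] for w, seq in zip(weights, sequences))
        for t in range(slots)
    ]
    # Average over time: the "comprehensive feature" value.
    comprehensive = sum(feature_vector) / slots
    return feature_vector, comprehensive
```

For two sequences [8, 4] and [6, 2] this yields per-slot values [7.0, 3.0] and a comprehensive value of 5.0, echoing the example above where a momentary index of 8 coexists with a comprehensive value of 5.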
  • Step 103 Obtain an identification feature of the detection object according to the attribute feature, the feature vector, and a comprehensive feature vector, and output multimedia information corresponding to the detection object according to the identification feature;
  • the identification features of the detected object include:
  • And outputting the multimedia information corresponding to the detection object according to the identification feature includes:
  • the identification feature may be a personality feature;
  • the interaction type includes: the interpersonal orientation of the detection object within a specific time period; for example, a person object may be relatively open between 8 and 10 o'clock in the evening and willing to interact with people;
  • the multimedia information includes: text, pictures, music, images, and the like.
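The matching of the identification feature against an object interaction model and the resulting choice of multimedia can be sketched with lookup tables. Every table entry below is illustrative; in the patent the real model is a preset database built from empirical statistics.

```python
# Hypothetical interaction model: (personality feature, time period)
# pairs mapped to an interaction type. All contents are invented.
INTERACTION_MODEL = {
    ("cheerful", "evening"): "outgoing",
    ("cheerful", "morning"): "calm",
}

# Hypothetical multimedia catalogue keyed by interaction type.
MULTIMEDIA_BY_TYPE = {
    "outgoing": ["party_invite.jpg", "upbeat_song.mp3"],
    "calm": ["landscape.jpg"],
}

def multimedia_for(identification_feature, period):
    """Match the identification feature against the interaction model,
    then output multimedia matching the resulting interaction type."""
    itype = INTERACTION_MODEL.get((identification_feature, period), "calm")
    return MULTIMEDIA_BY_TYPE[itype]
```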
  • FIG. 8 is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention. As shown in FIG. 8, the image processing method in this embodiment includes:
  • Step 801: Identify whether the sample file is a sample photo or a sample image file; if it is a sample photo, perform step 802; if it is a sample image file, perform step 807;
  • this step includes: identifying, by the file format of the sample file, whether the sample file is a sample photo or a sample image file; for example, a file with format .jpg/.jpeg is a sample photo, and a file with format .mp4 is a sample image file.
  • Step 802: Classify the sample photos according to the proximity of the shooting time and shooting location;
  • this step includes: parsing the sample photos, obtaining the shooting time information and shooting location information of all the sample photos, and classifying the sample photos according to the proximity of the shooting time and shooting location;
  • classifying according to the proximity of the shooting time and shooting location follows the principle that photos whose shooting times are close and whose shooting locations are close belong together; for example, sample photos taken on September 10, 2014 at Tiananmen are grouped into one class.
  • Step 803: Classify the sample photos according to the different detection objects, perform object attribute analysis on the detection objects according to the classification result, and obtain the attribute features of the detection objects;
  • the detection object is a person object
  • the object attributes may include: wrinkles, body shape, skin color, etc.
  • the attribute features may include: gender, age, height, weight, etc.
  • classifying the sample photos according to the different detection objects includes: performing detection object identification on the sample photos and classifying them according to the identified different detection objects; that is, identifying the different person objects in the sample photos and classifying the sample photos according to the identified different person objects; this includes: identifying all faces in the sample photos by a face recognition algorithm, and classifying the sample photos according to the identified different person objects;
  • performing object attribute analysis on the detection objects according to the classification result to obtain the attribute features of the detection objects includes:
  • performing object attribute identification on the sample photos of the different detection objects, and comparing the identified object attributes with the standard reference objects in the preset database to obtain the attribute features of the detection objects.
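The comparison with standard reference objects can be sketched as a bracket lookup: a hypothetical wrinkle score recognized from the photo is compared against assumed reference brackets to yield an age-range attribute feature. Both the score scale and the brackets are invented for illustration.

```python
# Assumed reference brackets: (upper wrinkle score bound, age range).
# The wrinkle score is taken to lie in [0, 1]; all values are illustrative.
AGE_BRACKETS = [
    (0.2, "0-18"),
    (0.5, "19-40"),
    (1.0, "41+"),
]

def age_feature(wrinkle_score):
    """Return the age-range attribute feature for a wrinkle score by
    comparing it against the standard reference brackets."""
    for limit, bracket in AGE_BRACKETS:
        if wrinkle_score <= limit:
            return bracket
    return AGE_BRACKETS[-1][1]  # scores above all bounds fall in the top bracket
```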
  • Step 804 Acquire corresponding M object feature sequences according to the M reference features of the detection object.
  • this step includes: performing recognition of the M reference features on the sample photos of the detection object, obtaining the object features corresponding to each reference feature according to the identified M reference features, and sorting the object features corresponding to the M reference features of all the obtained sample photos, to obtain the M object feature sequences of the detection object corresponding to the M reference features;
  • This step includes:
  • performing Mth reference feature recognition on the sample photos, and matching the obtained Mth reference feature with the Mth reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the Mth reference feature; sorting all the object features of the detection object by time to obtain the Mth object feature sequence of the detection object, where M is a positive integer;
  • the value of the M may be set according to the actual situation. In the embodiment of the present invention, the value of the M is 5;
  • the first reference feature is a facial expression feature; the second reference feature is an action feature; the third reference feature is an interpersonal interaction feature; the fourth reference feature is a weather condition feature; and the fifth reference feature is a background-specific-object feature.
  • acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
  • Interpersonal interaction feature recognition is performed on the sample photos of a person object, that is, identifying the interaction relationship and positional relationship between the person object and other person objects in the sample photo, such as hugging another person object or being far apart from other person objects; the obtained interpersonal interaction features are matched with the interpersonal interaction feature information in the preset feature database to obtain the emotional features of the person object corresponding to the interpersonal interaction features, such as enthusiasm, affinity, indifference, etc.; the emotional features of all sample photos are sorted by time to obtain the third emotional feature sequence of the person object;
  • acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
  • Weather condition feature recognition is performed on the sample photos of a person object, identifying conditions such as sunny, cloudy, rainy, etc.; the obtained weather condition features are matched with the weather condition feature information in the preset feature database to obtain the emotional features of the person object corresponding to the weather condition features, such as cheerful, melancholy, romantic, etc.; the emotional features of all the obtained sample photos of the person object are sorted by time to obtain the fourth emotional feature sequence of the person object;
  • acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
  • Background-specific objects in the sample photos of a person object are identified, such as flowers, historical and cultural sites, etc.; the obtained background-specific-object features are matched with the background-specific-object feature information in the preset feature database to obtain the emotional features of the person object corresponding to the specific object features, such as loves travel, loves history, etc.; the emotional features of all the obtained sample photos of the person object are sorted by time to obtain the fifth emotional feature sequence of the person object.
  • Step 805 Perform weighting processing on the M object feature sequences, and acquire feature vectors and integrated feature vectors of the detection object.
  • the feature vector may be an index of the personality orientation of the person object at a specific time, with an index range of (1, 10); for example, the cheerfulness index of a person object at 8 o'clock in the morning may be 8, while the overall cheerfulness index, that is, the comprehensive feature vector, may be 5.
  • the processing includes: performing weighting processing on the five object feature sequences to obtain an emotional feature graph of the person object, obtaining the personality orientation index of the person object at a specific time according to the emotional feature graph, and matching the personality orientation index at the specific time with the comprehensive personality orientation model in the feature database to obtain the comprehensive personality orientation index of the person object.
  • the feature database in the embodiment of the present invention is a database preset according to empirical statistics, and each model in the database is a data model preset according to empirical statistics.
  • Step 806: Obtain the identification feature of the detection object according to the attribute feature, the feature vector, and the integrated feature vector, output the multimedia information corresponding to the detection object according to the identification feature, and perform step 808;
  • the identification feature is a personality feature;
  • the multimedia information includes: text, a picture, music, an image, and the like
  • the obtaining the identification feature of the detection object according to the attribute feature, the feature vector, and the comprehensive feature vector includes:
  • Matching the attribute feature, the feature vector, and the comprehensive feature vector with the identification feature model in the feature database to obtain the identification feature of the detection object includes: matching the obtained personality orientation index at a specific time and the comprehensive personality orientation index with the personality feature model in the feature database to obtain the personality features of the person object;
  • And outputting the multimedia information corresponding to the detection object according to the identification feature includes:
  • the interaction type includes: the interpersonal orientation of the detection object in a specific time period; for example, a person object is relatively open between 8:00 and 10:00 pm, and is willing to interact with other people.
  • Step 807: Extract a number of sample photos from the sample image file, and perform step 802;
  • this step includes: using a motion image prediction algorithm to capture frames of the sample image file in which the detection object is present as sample photos.
  • Step 808 End the processing flow.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus of the embodiment of the present invention comprises: an obtaining module 91, a processing module 92, and an output module 93;
  • the obtaining module 91 is configured to perform object attribute analysis on the sample photo of the detection object to obtain an attribute feature of the detection object;
  • the processing module 92 is configured to acquire different object feature sequences corresponding to the detection object according to different reference features, and perform weighting processing on the obtained object feature sequences to obtain the feature vector and comprehensive feature vector of the detection object;
  • the output module 93 is configured to obtain the identification feature of the detection object according to the attribute feature, the feature vector, and the integrated feature vector, and output multimedia information corresponding to the detection object according to the identification feature.
  • the apparatus further includes: a classification module 94 configured to perform detection object recognition on the sample photo, and classify the sample photos according to the identified different detection objects.
  • the device further includes: a pre-processing module 95 configured to parse the sample photos, obtain the shooting time information and shooting location information of all sample photos, and classify the sample photos according to the proximity of shooting time and shooting location;
  • the proximity principle for shooting time and shooting location means that photos taken at close times and at close locations are grouped together;
  • the pre-processing module 95 is further configured to identify whether a sample file is a sample photo or a sample video file and, if it is a sample video file, to cut the sample video file into a plurality of sample photos;
  • identifying whether the sample file is a sample photo or a sample video file includes identifying the type by the format of the sample file: a file in .jpg/.jpeg format is a sample photo, and a file in .mp4 format is a sample video file;
  • the pre-processing module 95 cuts the sample video file into a plurality of sample photos by extracting, through a motion-picture prediction algorithm, a plurality of sample photos from the sample video file according to the presence of the detection object.
  • the processing module 92 acquires the different object feature sequences corresponding to the detection object according to different reference features as follows: the processing module 92 performs recognition of M reference features on the sample photos of the detection object, obtains the object features corresponding to the reference features according to the M recognized reference features, and sorts the object features corresponding to the M reference features of all obtained sample photos to obtain the M object feature sequences of the detection object corresponding to the M reference features; this includes:
  • performing recognition of the M-th reference feature on the sample photos, matching the obtained M-th reference feature against the M-th reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the M-th reference feature, and sorting all obtained object features of the detection object by time to obtain the M-th object feature sequence of the detection object, where M is a positive integer;
  • the output module 93 obtains the identification feature of the detection object according to the attribute feature, the feature vector, and the comprehensive feature vector by matching them against the identification feature model in the preset feature database;
  • the output module 93 outputs the multimedia information corresponding to the detection object according to the identification feature by matching the identification feature information against an object interaction model in the preset feature database to obtain the interaction type of the detection object, and outputting the multimedia information corresponding to the detection object according to the interaction type.
  • the obtaining module, the processing module, the output module, the classification module, and the pre-processing module may all be implemented by a processor, or by specific logic circuits; the processor may be a processor on a mobile terminal or a server, and in practical applications the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • if the image processing method described above is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium; based on this understanding, the technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc.
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program configured to execute the image processing method of the embodiment of the present invention.

Abstract

The present invention discloses an image processing method: performing object attribute analysis on the sample photos of a detection object to obtain the attribute features of the detection object; acquiring different object feature sequences corresponding to the detection object according to different reference features, and performing weighting processing on the obtained object feature sequences to obtain a feature vector and a comprehensive feature vector of the detection object; obtaining an identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and outputting multimedia information corresponding to the detection object according to the identification feature. The present invention also discloses an image processing apparatus and a storage medium.

Description

Image processing method, apparatus, and storage medium — Technical Field
The present invention relates to image processing technologies, and in particular to an image processing method, apparatus, and storage medium.
Background Art
With people's growing cultural needs and the development of smart imaging technology, individuals and families own more and more image-capturing terminals, and the number of videos and photos owned among friends and within families is also growing. It is therefore increasingly important to analyze the character features of the people in photos and videos and to make reasonable use of the analysis results.
At present, there is no image processing method that can analyze the character features of the people in images and, based on the analysis results, push to the user specific multimedia information such as text and music suited to those people.
Summary of the Invention
In view of this, embodiments of the present invention are directed to providing an image processing method, apparatus, and storage medium, which can improve the comprehensive utilization value of image data and make the interaction between users more targeted.
To achieve the above object, the technical solution of the embodiments of the present invention is implemented as follows:
An embodiment of the present invention provides an image processing method, the method including:
performing object attribute analysis on the sample photos of a detection object to obtain the attribute features of the detection object;
acquiring different object feature sequences corresponding to the detection object according to different reference features, and performing weighting processing on the obtained object feature sequences to obtain a feature vector and a comprehensive feature vector of the detection object;
obtaining an identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and outputting multimedia information corresponding to the detection object according to the identification feature.
In the above solution, before performing object attribute analysis on the sample photos of the detection object, the method further includes:
performing detection object recognition on the sample photos, and classifying the sample photos according to the different recognized detection objects.
In the above solution, before classifying the sample photos according to the different recognized detection objects, the method further includes: parsing the sample photos to obtain the shooting time information and shooting location information of all sample photos, and classifying the sample photos according to the proximity of shooting time and shooting location.
In the above solution, acquiring different object feature sequences corresponding to the detection object according to different reference features includes:
performing recognition of different reference features on the sample photos of the detection object, obtaining the corresponding object features according to the recognized reference features, and sorting the object features corresponding to the different reference features of all obtained sample photos to obtain different object feature sequences of the detection object corresponding to the different reference features.
In the above solution, obtaining the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector includes:
matching the attribute features, the feature vector, and the comprehensive feature vector against an identification feature model in a preset feature database to obtain the identification feature of the detection object.
In the above solution, outputting the multimedia information corresponding to the detection object according to the identification feature includes:
matching the identification feature information of the detection object against an object interaction model in a preset feature database to obtain the interaction type of the detection object, and outputting the multimedia information corresponding to the detection object according to the interaction type.
An embodiment of the present invention further provides an image processing apparatus, the apparatus including: an obtaining module, a processing module, and an output module, where
the obtaining module is configured to perform object attribute analysis on the sample photos of the detection object to obtain the attribute features of the detection object;
the processing module is configured to acquire different object feature sequences corresponding to the detection object according to different reference features, and to perform weighting processing on the obtained object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object;
the output module is configured to obtain the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and to output multimedia information corresponding to the detection object according to the identification feature.
In the above solution, the apparatus further includes a classification module configured to perform detection object recognition on the sample photos and to classify the sample photos according to the different recognized detection objects.
In the above solution, the apparatus further includes a pre-processing module configured to parse the sample photos, obtain the shooting time information and shooting location information of all sample photos, and classify the sample photos according to the proximity of shooting time and shooting location.
In the above solution, the processing module is configured to perform recognition of different reference features on the sample photos of the detection object, obtain the corresponding object features according to the recognized reference features, and sort the object features corresponding to the different reference features of all obtained sample photos to obtain different object feature sequences of the detection object corresponding to the different reference features.
In the above solution, the output module is configured to match the attribute features, the feature vector, and the comprehensive feature vector against an identification feature model in a preset feature database to obtain the identification feature of the detection object.
In the above solution, the output module is configured to match the identification feature information of the detection object against an object interaction model in a preset feature database to obtain the interaction type of the detection object, and to output the multimedia information corresponding to the detection object according to the interaction type.
An embodiment of the present invention further provides a computer storage medium storing a computer program for executing the above image processing method of the embodiments of the present invention.
With the image processing method, apparatus, and storage medium provided by the embodiments of the present invention, object attribute analysis is performed on the sample photos of a detection object to obtain the attribute features of the detection object; different object feature sequences corresponding to the detection object are acquired according to different reference features, and weighting processing is performed on the obtained object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object; the identification feature of the detection object is obtained according to the attribute features, the feature vector, and the comprehensive feature vector, and multimedia information corresponding to the detection object is output according to the identification feature. In this way, the comprehensive utilization value of image data can be improved; when the detection object is a person object, the multimedia information finally output on the basis of the identification feature, the feature vector, and so on better matches the characteristics and preferences of the detection object, so the interaction between the user and the detection object becomes more targeted, improving the user's interpersonal interaction ability and user experience.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention;
FIG. 2 is a schematic flowchart of a method for pre-processing sample photos according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method for recognizing facial expression emotion features in sample photos according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a method for recognizing action emotion features in sample photos according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of a method for recognizing interpersonal interaction emotion features in sample photos according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of a method for recognizing historical weather emotion features in sample photos according to an embodiment of the present invention;
FIG. 7 is a schematic flowchart of a method for recognizing background-specific-object emotion features in sample photos according to an embodiment of the present invention;
FIG. 8 is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In the embodiments of the present invention, object attribute analysis is performed on the sample photos of a detection object to obtain the attribute features of the detection object; different object feature sequences corresponding to the detection object are acquired according to different reference features, and weighting processing is performed on the obtained object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object; the identification feature of the detection object is obtained according to the attribute features, the feature vector, and the comprehensive feature vector, and multimedia information corresponding to the detection object is output according to the identification feature.
FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present invention. As shown in FIG. 1, the flow of the image processing method of this embodiment includes:
Step 101: Perform object attribute analysis on the sample photos of a detection object to obtain the attribute features of the detection object;
Before this step, the method further includes: performing detection object recognition on the sample photos and classifying the sample photos according to the different recognized detection objects;
In the embodiments of the present invention, the detection object may be a person object; performing detection object recognition on the sample photos includes performing face recognition on the sample photos.
Further, before classifying the sample photos according to the different recognized detection objects, the method further includes pre-processing the sample photos. FIG. 2 is a schematic flowchart of a method for pre-processing sample photos according to an embodiment of the present invention. As shown in FIG. 2, the method of this embodiment for pre-processing sample photos includes:
Step 2a: Identify whether a sample file is a sample photo or a sample video file; if it is a sample photo, perform step 2b; if it is a sample video file, perform step 2c;
Here, identifying whether the sample file is a sample photo or a sample video file includes identifying the type by the format of the sample file; for example, a file in .jpg/.jpeg format is a sample photo, and a file in .mp4 format is a sample video file.
Step 2b: Parse the sample photos, obtain the shooting time information and shooting location information of all sample photos, classify the sample photos according to the proximity of shooting time and shooting location, and end this processing flow;
Here, the proximity principle for shooting time and shooting location means that photos taken at close times and at close locations are grouped together; proximity of shooting time means the shooting times of the photos are close to each other, and proximity of shooting location means the shooting locations are close to each other; for example, sample photos all taken on September 10, 2014 at Tiananmen are classified into one group.
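As an illustration of the proximity classification described here, the following Python sketch groups photos whose shooting time and location are both close. The time and distance thresholds, the input format, and the greedy single-pass grouping strategy are all assumptions for illustration only; the text does not specify them.

```python
from datetime import datetime
from math import hypot

# Hypothetical thresholds: photos within one hour and within a small
# coordinate distance of each other are considered "close".
TIME_EPS_S = 3600
DIST_EPS = 0.01  # rough lat/lon degrees, illustration only

def classify_by_proximity(photos):
    """Group photos whose shooting time and location are both close.

    `photos` is a list of dicts with a 'time' (datetime) and a 'loc'
    (lat, lon) key, as might be parsed from photo metadata.
    Greedy single-pass grouping over the photos sorted by time.
    """
    groups = []
    for p in sorted(photos, key=lambda x: x["time"]):
        for g in groups:
            ref = g[-1]  # compare with the last photo in the group
            dt = abs((p["time"] - ref["time"]).total_seconds())
            dd = hypot(p["loc"][0] - ref["loc"][0], p["loc"][1] - ref["loc"][1])
            if dt <= TIME_EPS_S and dd <= DIST_EPS:
                g.append(p)
                break
        else:
            groups.append([p])  # no close group found: start a new one
    return groups
```

Real shooting metadata would come from EXIF tags; here the parsing step is skipped and the tuples are assumed to be already extracted.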
Step 2c: Cut the sample video file into a number of sample photos, and perform step 2b;
Here, cutting the sample video file into a number of sample photos includes extracting, by a motion-picture prediction algorithm, a number of sample photos from the sample video file according to the presence of the detection object.
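The frame-extraction step can be sketched as follows. The `detector` callback, the `min_gap` de-duplication parameter, and the frame representation are placeholders, since the text only names a motion-picture prediction algorithm without detailing it.

```python
def extract_sample_photos(frames, detector, min_gap=10):
    """Cut a video (an iterable of frames) into sample photos.

    Stand-in for the motion-picture prediction step: `detector` is any
    callable returning True when the detection object is present in a
    frame (e.g. a face detector). A frame is kept only if the object is
    present and at least `min_gap` frames have passed since the last
    kept frame, so runs of near-duplicate frames are skipped.
    """
    photos, last_kept = [], -min_gap
    for i, frame in enumerate(frames):
        if detector(frame) and i - last_kept >= min_gap:
            photos.append((i, frame))  # keep frame index with the photo
            last_kept = i
    return photos
```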
In an embodiment, performing object attribute analysis on the sample photos of the detection object to obtain the attribute features of the detection object includes:
performing object attribute recognition on the sample photos of the different detection objects, and comparing the recognized object attributes with standard reference objects in a preset database to obtain the attribute features of the detection object;
Here, the object attributes may include wrinkles, body shape, skin color, and the like;
The attribute features may include gender, age, height, weight, and the like.
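A minimal sketch of the attribute comparison described here: recognized raw attributes are compared against preset reference bands to yield coarse attribute features. The wrinkle-score bands and the returned features are invented for illustration; the patent does not specify the standard reference objects.

```python
# Hypothetical reference bands: a wrinkle score at or below the first
# threshold maps to the corresponding age group.
REFERENCE_AGE_BANDS = [
    (0.2, "child"),
    (0.5, "adult"),
    (1.0, "senior"),
]

def estimate_attribute_features(wrinkle_score, height_cm):
    """Map raw object attributes to coarse attribute features."""
    age_group = REFERENCE_AGE_BANDS[-1][1]  # fallback for out-of-range scores
    for upper, band in REFERENCE_AGE_BANDS:
        if wrinkle_score <= upper:
            age_group = band
            break
    return {"age_group": age_group, "height_cm": height_cm}
```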
Step 102: Acquire different object feature sequences corresponding to the detection object according to different reference features, and perform weighting processing on the obtained object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object;
Here, acquiring different object feature sequences corresponding to the detection object according to different reference features includes:
performing recognition of M reference features on the sample photos of the detection object, obtaining the object features corresponding to the reference features according to the M recognized reference features, and sorting the object features corresponding to the M reference features of all obtained sample photos to obtain the M object feature sequences of the detection object corresponding to the M reference features;
In detail, acquiring different object feature sequences corresponding to the detection object according to different reference features includes:
performing recognition of the first reference feature on the sample photos, matching the obtained first reference feature against the first reference feature information in a preset feature database to obtain the object feature of the detection object corresponding to the first reference feature, and sorting the object features of all obtained sample photos of the detection object by time to obtain the first object feature sequence of the detection object;
performing recognition of the second reference feature on the sample photos, matching the obtained second reference feature against the second reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the second reference feature, and sorting the object features of all obtained sample photos of the detection object by time to obtain the second object feature sequence of the detection object;
and so on, up to the M-th reference feature: performing recognition of the M-th reference feature on the sample photos, matching the obtained M-th reference feature against the M-th reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the M-th reference feature, and sorting all obtained object features of the detection object by time to obtain the M-th object feature sequence of the detection object, where M is a positive integer;
Here, the value of M may be set according to actual conditions, and when M is greater than 2, the method of the embodiments of the present invention may be implemented using any one or any combination of the M reference features.
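The M-sequence construction above can be sketched as follows. The `FEATURE_DB` contents and the extractor callables are hypothetical stand-ins for the recognizers and the preset feature database, which the text leaves unspecified.

```python
# Invented placeholder for the preset feature database: maps a
# recognized reference feature to the corresponding object feature.
FEATURE_DB = {
    "expression": {"smile": "joy", "straight_face": "anger"},
    "action": {"jump": "lively", "stand": "calm"},
}

def build_feature_sequences(photos, extractors):
    """Return one time-sorted object-feature sequence per reference feature.

    `photos`: list of dicts with a 'time' key plus raw per-photo data.
    `extractors`: mapping reference-feature name -> callable(photo) that
    returns the recognized reference feature for that photo.
    """
    sequences = {}
    for name, extract in extractors.items():
        seq = []
        for p in sorted(photos, key=lambda x: x["time"]):
            ref = extract(p)                 # recognize the reference feature
            obj = FEATURE_DB[name].get(ref)  # match against the feature DB
            if obj is not None:
                seq.append((p["time"], obj))
        sequences[name] = seq
    return sequences
```

With M extractors this yields the M time-sorted sequences fed to the weighting step.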
In one embodiment of the present invention, the value of M may be 5; the first reference feature may be a facial expression feature; the second reference feature may be an action feature; the third reference feature may be an interpersonal interaction feature; the fourth reference feature may be a weather condition feature; and the fifth reference feature may be a background-specific-object feature.
FIG. 3 is a schematic flowchart of a method for recognizing facial expression emotion features in sample photos according to an embodiment of the present invention. As shown in FIG. 3, the method of this embodiment includes:
Step 3a: Perform facial expression recognition on the sample photos of a person object;
Here, the facial expression features include smiling, laughing, keeping a straight face, and the like.
Step 3b: Match the obtained facial expression features against the facial expression feature information in a preset feature database to obtain the facial expression emotion features of the person object corresponding to the facial expression features;
Here, the emotion features include joy, anger, sorrow, happiness, and the like.
Step 3c: Sort the facial expression emotion features of all obtained sample photos of the person object by time to obtain the facial expression emotion feature sequence of the person object.
FIG. 4 is a schematic flowchart of a method for recognizing action emotion features in sample photos according to an embodiment of the present invention. As shown in FIG. 4, the method of this embodiment includes:
Step 4a: Perform action feature recognition on the sample photos of a person object;
Here, the action features include jumping, making a V sign, and the like.
Step 4b: Match the obtained action features against the action feature information in a preset feature database to obtain the action emotion features of the person object corresponding to the action features;
Here, the action emotion features include lively, active, quiet, steady, and the like.
Step 4c: Sort the action emotion features of all obtained sample photos of the person object by time to obtain the action emotion feature sequence of the person object.
FIG. 5 is a schematic flowchart of a method for recognizing interpersonal interaction emotion features in sample photos according to an embodiment of the present invention. As shown in FIG. 5, the method of this embodiment includes:
Step 5a: Perform interpersonal interaction feature recognition on the sample photos of a person object;
Here, the interpersonal interaction features include the interaction and positional relationships between the person object and other person objects in the sample photos, e.g., hugging a person object or keeping a large distance from other person objects.
Step 5b: Match the obtained interpersonal interaction features against the interpersonal interaction feature information in a preset feature database to obtain the interpersonal interaction emotion features of the person object corresponding to the interpersonal interaction features;
Here, the interpersonal interaction emotion features include warm, approachable, aloof, and the like.
Step 5c: Sort the emotion features of all obtained sample photos of the person object by time to obtain the interpersonal interaction emotion feature sequence of the person object.
FIG. 6 is a schematic flowchart of a method for recognizing historical weather emotion features in sample photos according to an embodiment of the present invention. As shown in FIG. 6, the method of this embodiment includes:
Step 6a: Recognize the weather condition features in the sample photos of a person object;
Here, the weather condition features include sunny, cloudy, rainy, and the like.
Step 6b: Match the obtained weather condition features against the weather condition feature information in a preset feature database to obtain the weather condition emotion features of the person object corresponding to the weather condition features;
Here, the weather condition emotion features include cheerful, melancholic, romantic, and the like.
Step 6c: Sort the weather condition emotion features of all obtained sample photos of the person object by time to obtain the weather condition emotion feature sequence of the person object.
FIG. 7 is a schematic flowchart of a method for recognizing background-specific-object emotion features in sample photos according to an embodiment of the present invention. As shown in FIG. 7, the method of this embodiment includes:
Step 7a: Recognize the specific objects in the background of the sample photos of a person object;
Here, the background-specific objects include flowers and plants, historical and cultural sites, and the like.
Step 7b: Match the obtained background-specific-object features against the background-specific-object feature information in a preset feature database to obtain the background-specific-object emotion features of the person object corresponding to the specific-object features;
Here, the background-specific-object emotion features include a love of travel, a love of history, and the like.
Step 7c: Sort the emotion features of all obtained sample photos of the person object by time to obtain the background-specific-object emotion feature sequence of the person object.
Performing weighting processing on the M object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object includes:
performing weighting processing on the M object feature sequences to obtain a feature curve of the detection object, obtaining the feature vector of the detection object from the feature curve, and matching the feature vector against a comprehensive vector model in the feature database to obtain the comprehensive feature vector of the detection object;
Here, the weighting parameters used in the weighting process may be set according to actual conditions;
The feature vector is the feature vector of the detection object at a specific time, such as an emotion vector at a specific time.
In the embodiments of the present invention, the feature database is a database preset on the basis of empirical statistics.
When the detection object is a person object, the feature vector may be an index of the person's personality orientation at a specific time, with an index range of (1, 10), and the comprehensive feature vector may be the person's overall personality orientation; for example, a person's cheerfulness index at 8:00 a.m. may be 8, while the overall cheerfulness index, i.e., the comprehensive feature vector, may be 5.
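The weighting step might look like the following sketch: the per-reference emotion sequences are combined into a time-indexed feature curve, the feature vector is read off at a specific time, and a comprehensive index is derived from the whole curve. The equal weights, the numeric emotion scores, and the use of a simple mean for the comprehensive index are assumptions on top of the (1, 10) index range given in the text.

```python
def feature_curve(sequences, weights):
    """Combine sequences (name -> list of (time, score)) into a single
    time-indexed curve of weighted scores."""
    curve = {}
    for name, seq in sequences.items():
        w = weights[name]
        for t, score in seq:
            curve[t] = curve.get(t, 0.0) + w * score
    return curve

def comprehensive_vector(curve):
    """Overall personality-orientation index: mean of the curve values."""
    return sum(curve.values()) / len(curve)
```

Reading `curve[t]` gives the feature vector at the specific time `t`; `comprehensive_vector` plays the role of matching against a comprehensive vector model, reduced here to a mean.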
Step 103: Obtain the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and output multimedia information corresponding to the detection object according to the identification feature;
Here, obtaining the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector includes:
matching the attribute features, the feature vector, and the comprehensive feature vector against the identification feature model in the feature database to obtain the identification feature of the detection object;
Outputting the multimedia information corresponding to the detection object according to the identification feature includes:
matching the identification feature information against an object interaction model in a preset feature database to obtain the interaction type of the detection object, and outputting corresponding multimedia information according to the interaction type.
In the embodiments of the present invention, the identification feature may be a personality feature; the interaction type includes the interpersonal orientation of the detection object within a specific time period; for example, a person object may be relatively outgoing between 8:00 p.m. and 10:00 p.m. and willing to interact with others;
The multimedia information includes text, pictures, music, video, and the like.
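Matching the identification feature against an object interaction model and selecting multimedia can be sketched as below; the model entries, the time slots, and the multimedia items are invented placeholders for the preset database contents.

```python
# Hypothetical object interaction model: (personality feature, time slot)
# -> interaction type; the interaction type then selects multimedia.
INTERACTION_MODEL = {
    ("outgoing", "evening"): "sociable",
    ("outgoing", "morning"): "reserved",
}
MULTIMEDIA = {
    "sociable": ["upbeat_music.mp3", "party_invite.txt"],
    "reserved": ["calm_music.mp3"],
}

def output_multimedia(personality, time_slot):
    """Match the identification feature against the interaction model and
    return the multimedia items for the resulting interaction type."""
    itype = INTERACTION_MODEL.get((personality, time_slot), "reserved")
    return itype, MULTIMEDIA[itype]
```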
FIG. 8 is a schematic flowchart of an image processing method according to Embodiment 2 of the present invention. As shown in FIG. 8, the flow of the image processing method of this embodiment includes:
Step 801: Identify whether a sample file is a sample photo or a sample video file; if it is a sample photo, perform step 802; if it is a sample video file, perform step 807;
This step includes identifying, by the format of the sample file, whether the sample file is a sample photo or a sample video file; for example, a file in .jpg/.jpeg format is a sample photo, and a file in .mp4 format is a sample video file.
Step 802: Classify the sample photos according to the proximity of shooting time and shooting location;
This step includes parsing the sample photos, obtaining the shooting time information and shooting location information of all sample photos, and classifying the sample photos according to the proximity of shooting time and shooting location;
Here, the proximity principle for shooting time and shooting location means that photos taken at close times and at close locations are grouped together; proximity of shooting time means the shooting times of the photos are close to each other, and proximity of shooting location means the shooting locations are close to each other; for example, sample photos all taken on September 10, 2014 at Tiananmen are classified into one group.
Step 803: Classify the sample photos according to the different detection objects, and perform object attribute analysis on the detection objects according to the classification results to obtain the attribute features of the detection objects;
Here, the detection object is a person object; the object attributes may include wrinkles, body shape, skin color, and the like; the attribute features may include gender, age, height, weight, and the like;
Classifying the sample photos according to the different detection objects includes performing detection object recognition on the sample photos and classifying the sample photos according to the different recognized detection objects, i.e., recognizing the different person objects in the sample photos and classifying the sample photos accordingly; this includes recognizing all faces in the sample photos by a face recognition algorithm and classifying the sample photos according to the different recognized person objects;
Performing object attribute analysis on the detection objects according to the classification results to obtain the attribute features of the detection objects includes:
performing object attribute recognition on the sample photos of the different detection objects, and comparing the recognized object attributes with standard reference objects in a preset database to obtain the attribute features of the detection objects.
Step 804: Acquire the M object feature sequences corresponding to the M reference features of the detection object;
This step includes performing recognition of M reference features on the sample photos of the detection object, obtaining the object features corresponding to the reference features according to the M recognized reference features, and sorting the object features corresponding to the M reference features of all obtained sample photos to obtain the M object feature sequences of the detection object corresponding to the M reference features;
This step includes:
performing recognition of the first reference feature on the sample photos, matching the obtained first reference feature against the first reference feature information in a preset feature database to obtain the object feature of the detection object corresponding to the first reference feature, and sorting the object features of all obtained sample photos of the detection object by time to obtain the first object feature sequence of the detection object;
performing recognition of the second reference feature on the sample photos, matching the obtained second reference feature against the second reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the second reference feature, and sorting the object features of all obtained sample photos of the detection object by time to obtain the second object feature sequence of the detection object;
and so on, up to the M-th reference feature: performing recognition of the M-th reference feature on the sample photos, matching the obtained M-th reference feature against the M-th reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the M-th reference feature, and sorting all obtained object features of the detection object by time to obtain the M-th object feature sequence of the detection object, where M is a positive integer;
Here, the value of M may be set according to actual conditions; in this embodiment of the present invention, the value of M is 5;
The first reference feature is a facial expression feature; the second reference feature is an action feature; the third reference feature is an interpersonal interaction feature; the fourth reference feature is a weather condition feature; and the fifth reference feature is a background-specific-object feature.
When the first reference feature is a facial expression feature, acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
performing facial expression recognition on the sample photos of a person object, e.g., smiling, laughing, keeping a straight face, matching the obtained facial expression features against the facial expression feature information in the preset feature database to obtain the emotion features of the person object corresponding to the facial expression features, e.g., joy, anger, sorrow, happiness, and sorting the emotion features of all obtained sample photos of the person object by time to obtain the first emotion feature sequence of the person object;
When the second reference feature is an action feature, acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
performing action feature recognition on the sample photos of a person object, e.g., jumping, making a V sign, matching the obtained action features against the action feature information in the preset feature database to obtain the emotion features of the person object corresponding to the action features, e.g., lively, active, quiet, steady, and sorting the emotion features of all obtained sample photos of the person object by time to obtain the second emotion feature sequence of the person object;
When the third reference feature is an interpersonal interaction feature, acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
performing interpersonal interaction feature recognition on the sample photos of a person object, i.e., recognizing the interaction and positional relationships between the person object and other person objects in the sample photos, e.g., hugging a person object or keeping a large distance from other person objects, matching the obtained interpersonal interaction features against the interpersonal interaction feature information in the preset feature database to obtain the emotion features of the person object corresponding to the interpersonal interaction features, e.g., warm, approachable, aloof, and sorting the emotion features of all obtained sample photos of the person object by time to obtain the third emotion feature sequence of the person object;
When the fourth reference feature is a historical weather condition feature, acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
identifying the weather conditions in the sample photos of a person object, e.g., sunny, cloudy, rainy, matching the obtained weather condition features against the weather condition feature information in the preset feature database to obtain the emotion features of the person object corresponding to the weather condition features, e.g., cheerful, melancholic, romantic, and sorting the emotion features of all obtained sample photos of the person object by time to obtain the fourth emotion feature sequence of the person object;
When the fifth reference feature is a background-specific-object feature, acquiring the corresponding object feature sequence according to the reference feature of the detection object includes:
identifying the specific objects in the background of the sample photos of a person object, e.g., flowers and plants or historical and cultural sites, matching the obtained background-specific-object features against the background-specific-object feature information in the preset feature database to obtain the emotion features of the person object corresponding to the specific-object features, e.g., a love of travel, a love of history, and sorting the emotion features of all obtained sample photos of the person object by time to obtain the fifth emotion feature sequence of the person object.
Step 805: Perform weighting processing on the M object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object;
This step includes performing weighting processing on the M object feature sequences to obtain a feature curve of the detection object, obtaining the feature vector of the detection object from the feature curve, and matching the feature vector against a comprehensive vector model in the feature database to obtain the comprehensive feature vector of the detection object, where the feature vector is the feature vector of the detection object at a specific time;
The feature vector may be an index of a person object's personality orientation at a specific time, with an index range of (1, 10); for example, a person's cheerfulness index at 8:00 a.m. may be 8, while the overall cheerfulness index, i.e., the comprehensive feature vector, may be 5.
In this embodiment, the above processing includes performing weighting processing on the above five object feature sequences to obtain an emotion feature curve of the person object, obtaining the person object's personality orientation index at a specific time from the emotion feature curve, and matching the personality orientation index at the specific time against a comprehensive personality orientation model in the feature database to obtain the comprehensive personality orientation index of the person object.
In the embodiments of the present invention, the feature database is a database preset on the basis of empirical statistics, and each model in the database is a data model preset on the basis of empirical statistics.
Step 806: Obtain the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, output multimedia information corresponding to the detection object according to the identification feature, and perform step 808;
Here, the identification feature is a personality feature; the multimedia information includes text, pictures, music, video, and the like;
Obtaining the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector includes:
matching the attribute features, the feature vector, and the comprehensive feature vector against the identification feature model in the feature database to obtain the identification feature of the detection object; this includes matching the obtained personality features, the personality orientation index at the specific time, and the comprehensive personality orientation index against the personality feature model in the feature database to obtain the personality features of the person object;
Outputting the multimedia information corresponding to the detection object according to the identification feature includes:
matching the identification feature information against an object interaction model in the preset feature database to obtain the interaction type of the detection object, and outputting corresponding multimedia information according to the interaction type; this includes matching the obtained personality feature information against the object interaction model in the preset feature database to obtain the interaction type of the person object, and outputting corresponding multimedia information according to the interaction type;
Here, the interaction type includes the interpersonal orientation of the detection object within a specific time period; for example, a person object may be relatively outgoing between 8:00 p.m. and 10:00 p.m. and willing to interact with other people.
Step 807: Cut the sample video file into a number of sample photos, and perform step 802;
This step includes extracting, by a motion-picture prediction algorithm, a number of sample photos from the sample video file according to the presence of the detection object.
Step 808: End this processing flow.
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in FIG. 9, the image processing apparatus of the embodiment of the present invention includes: an obtaining module 91, a processing module 92, and an output module 93, where
the obtaining module 91 is configured to perform object attribute analysis on the sample photos of the detection object to obtain the attribute features of the detection object;
the processing module 92 is configured to acquire different object feature sequences corresponding to the detection object according to different reference features, and to perform weighting processing on the obtained object feature sequences to obtain the feature vector and the comprehensive feature vector of the detection object;
the output module 93 is configured to obtain the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and to output multimedia information corresponding to the detection object according to the identification feature.
In an embodiment, the apparatus further includes a classification module 94 configured to perform detection object recognition on the sample photos and to classify the sample photos according to the different recognized detection objects.
In an embodiment, the apparatus further includes a pre-processing module 95 configured to parse the sample photos, obtain the shooting time information and shooting location information of all sample photos, and classify the sample photos according to the proximity of shooting time and shooting location;
The proximity principle for shooting time and shooting location means that photos taken at close times and at close locations are grouped together;
In an embodiment, the pre-processing module 95 is further configured to identify whether a sample file is a sample photo or a sample video file and, if it is a sample video file, to cut the sample video file into a number of sample photos;
Identifying whether the sample file is a sample photo or a sample video file includes identifying the type by the format of the sample file; for example, a file in .jpg/.jpeg format is a sample photo, and a file in .mp4 format is a sample video file;
The pre-processing module 95 cuts the sample video file into a number of sample photos by extracting, through a motion-picture prediction algorithm, a number of sample photos from the sample video file according to the presence of the detection object.
In an embodiment, the processing module 92 acquires the different object feature sequences corresponding to the detection object according to different reference features as follows: the processing module 92 performs recognition of M reference features on the sample photos of the detection object, obtains the object features corresponding to the reference features according to the M recognized reference features, and sorts the object features corresponding to the M reference features of all obtained sample photos to obtain the M object feature sequences of the detection object corresponding to the M reference features; this includes:
performing recognition of the first reference feature on the sample photos, matching the obtained first reference feature against the first reference feature information in a preset feature database to obtain the object feature of the detection object corresponding to the first reference feature, and sorting the object features of all obtained sample photos of the detection object by time to obtain the first object feature sequence of the detection object;
performing recognition of the second reference feature on the sample photos, matching the obtained second reference feature against the second reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the second reference feature, and sorting the object features of all obtained sample photos of the detection object by time to obtain the second object feature sequence of the detection object;
and so on, up to the M-th reference feature: performing recognition of the M-th reference feature on the sample photos, matching the obtained M-th reference feature against the M-th reference feature information in the preset feature database to obtain the object feature of the detection object corresponding to the M-th reference feature, and sorting all obtained object features of the detection object by time to obtain the M-th object feature sequence of the detection object, where M is a positive integer;
Here, the value of M may be set according to actual conditions.
In an embodiment, the output module 93 obtains the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector by:
matching the attribute features, the feature vector, and the comprehensive feature vector against the identification feature model in the preset feature database to obtain the identification feature of the detection object.
In an embodiment, the output module 93 outputs the multimedia information corresponding to the detection object according to the identification feature by matching the identification feature information against an object interaction model in the preset feature database to obtain the interaction type of the detection object, and outputting the multimedia information corresponding to the detection object according to the interaction type.
The obtaining module, the processing module, the output module, the classification module, and the pre-processing module proposed in the embodiments of the present invention may all be implemented by a processor, or, of course, by specific logic circuits; the processor may be a processor on a mobile terminal or a server, and in practical applications the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
In the embodiments of the present invention, if the image processing method described above is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention may, in essence or in the part contributing to the prior art, be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. In this way, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program configured to execute the image processing method of the embodiments of the present invention described above.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (13)

  1. An image processing method, the method comprising:
    performing object attribute analysis on sample photos of a detection object to obtain attribute features of the detection object;
    acquiring different object feature sequences corresponding to the detection object according to different reference features, and performing weighting processing on the obtained object feature sequences to obtain a feature vector and a comprehensive feature vector of the detection object;
    obtaining an identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and outputting multimedia information corresponding to the detection object according to the identification feature.
  2. The method according to claim 1, wherein before performing object attribute analysis on the sample photos of the detection object, the method further comprises:
    performing detection object recognition on the sample photos, and classifying the sample photos according to the different recognized detection objects.
  3. The method according to claim 2, wherein before classifying the sample photos according to the different recognized detection objects, the method further comprises: parsing the sample photos to obtain the shooting time information and shooting location information of all sample photos, and classifying the sample photos according to the proximity of shooting time and shooting location.
  4. The method according to claim 1 or 2, wherein acquiring different object feature sequences corresponding to the detection object according to different reference features comprises:
    performing recognition of different reference features on the sample photos of the detection object, obtaining the corresponding object features according to the recognized reference features, and sorting the object features corresponding to the different reference features of all obtained sample photos to obtain different object feature sequences of the detection object corresponding to the different reference features.
  5. The method according to claim 1 or 2, wherein obtaining the identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector comprises:
    matching the attribute features, the feature vector, and the comprehensive feature vector against an identification feature model in a preset feature database to obtain the identification feature of the detection object.
  6. The method according to claim 1 or 2, wherein outputting the multimedia information corresponding to the detection object according to the identification feature comprises:
    matching the identification feature information of the detection object against an object interaction model in a preset feature database to obtain the interaction type of the detection object, and outputting the multimedia information corresponding to the detection object according to the interaction type.
  7. An image processing apparatus, the apparatus comprising: an obtaining module, a processing module, and an output module, wherein
    the obtaining module is configured to perform object attribute analysis on the sample photos of the detection object to obtain attribute features of the detection object;
    the processing module is configured to acquire different object feature sequences corresponding to the detection object according to different reference features, and to perform weighting processing on the obtained object feature sequences to obtain a feature vector and a comprehensive feature vector of the detection object;
    the output module is configured to obtain an identification feature of the detection object according to the attribute features, the feature vector, and the comprehensive feature vector, and to output multimedia information corresponding to the detection object according to the identification feature.
  8. The apparatus according to claim 7, wherein the apparatus further comprises a classification module configured to perform detection object recognition on the sample photos and to classify the sample photos according to the different recognized detection objects.
  9. The apparatus according to claim 8, wherein the apparatus further comprises a pre-processing module configured to parse the sample photos, obtain the shooting time information and shooting location information of all sample photos, and classify the sample photos according to the proximity of shooting time and shooting location.
  10. The apparatus according to claim 7 or 8, wherein the processing module is configured to perform recognition of different reference features on the sample photos of the detection object, obtain the corresponding object features according to the recognized reference features, and sort the object features corresponding to the different reference features of all obtained sample photos to obtain different object feature sequences of the detection object corresponding to the different reference features.
  11. The apparatus according to claim 7 or 8, wherein the output module is configured to match the attribute features, the feature vector, and the comprehensive feature vector against an identification feature model in a preset feature database to obtain the identification feature of the detection object.
  12. The apparatus according to claim 7 or 8, wherein the output module is configured to match the identification feature information of the detection object against an object interaction model in a preset feature database to obtain the interaction type of the detection object, and to output the multimedia information corresponding to the detection object according to the interaction type.
  13. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to execute the image processing method according to any one of claims 1 to 6.
PCT/CN2015/079112 2014-10-08 2015-05-15 Image processing method, apparatus, and storage medium WO2016054918A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410529494.2 2014-10-08
CN201410529494.2A CN105488516A (zh) 2014-10-08 2014-10-08 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2016054918A1 true WO2016054918A1 (zh) 2016-04-14

Family

ID=55652549

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/079112 WO2016054918A1 (zh) 2014-10-08 2015-05-15 Image processing method, apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN105488516A (zh)
WO (1) WO2016054918A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537058A (zh) * 2021-07-16 2021-10-22 山东新一代信息产业技术研究院有限公司 Method for determining association relationships between strangers and security control system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447741A (zh) * 2016-11-30 2017-02-22 努比亚技术有限公司 Automatic photo synthesis method and system
CN108764210B (zh) * 2018-06-12 2019-11-15 焦点科技股份有限公司 Pig re-identification method and system based on a reference object
CN108962093B (zh) * 2018-07-13 2020-07-10 法瑞新科技(江西)有限公司 Electronic photo frame

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7016532B2 (en) * 2000-11-06 2006-03-21 Evryx Technologies Image capture and identification system and process
CN101163199A (zh) * 2006-08-02 2008-04-16 索尼株式会社 Image capturing apparatus and method, expression evaluation apparatus, and program
CN101393599A (zh) * 2007-09-19 2009-03-25 中国科学院自动化研究所 Game character control method based on facial expressions
CN101409070A (zh) * 2008-03-28 2009-04-15 徐开笑 Music reconstruction method based on motion image analysis
US20110211737A1 (en) * 2010-03-01 2011-09-01 Microsoft Corporation Event Matching in Social Networks
CN102271241A (zh) * 2011-09-02 2011-12-07 北京邮电大学 Image communication method and system based on facial expression/action recognition

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841438B (zh) * 2012-11-21 2016-08-03 腾讯科技(深圳)有限公司 Information push method, information push system, and digital television receiving terminal
CN103744858B (zh) * 2013-12-11 2017-09-22 深圳先进技术研究院 Information push method and system
CN103716702A (zh) * 2013-12-17 2014-04-09 三星电子(中国)研发中心 Television program recommendation apparatus and method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537058A (zh) * 2021-07-16 2021-10-22 山东新一代信息产业技术研究院有限公司 Method for determining association relationships between strangers and security control system
CN113537058B (zh) * 2021-07-16 2023-12-15 山东新一代信息产业技术研究院有限公司 Method for determining association relationships between strangers and security control system

Also Published As

Publication number Publication date
CN105488516A (zh) 2016-04-13

Similar Documents

Publication Publication Date Title
US10900772B2 (en) Apparatus and methods for facial recognition and video analytics to identify individuals in contextual video streams
US10445562B2 (en) AU feature recognition method and device, and storage medium
Yao et al. Capturing au-aware facial features and their latent relations for emotion recognition in the wild
Sangineto et al. We are not all equal: Personalizing models for facial expression analysis with transductive parameter transfer
Abd El Meguid et al. Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers
US9311530B1 (en) Summarizing a photo album in a social network system
US9477685B1 (en) Finding untagged images of a social network member
US9542419B1 (en) Computer-implemented method for performing similarity searches
CN105917305B (zh) Filtering and shutter capture based on the emotional content of images
WO2019095571A1 (zh) Person emotion analysis method, apparatus, and storage medium
EP2915101A1 (en) Method and system for predicting personality traits, capabilities and suggested interactions from images of a person
US10652454B2 (en) Image quality evaluation
WO2016054918A1 (zh) Image processing method, apparatus, and storage medium
US9286710B2 (en) Generating photo animations
US20170185676A1 (en) System and method for profiling a user based on visual content
US20210312678A1 (en) Generating augmented reality experiences with physical products using profile information
KR20220163430A (ko) 메시징 시스템에서의 증강 현실 경험을 위한 물리적 제품들의 식별
KR20230013280A (ko) 클라이언트 애플리케이션 콘텐츠 분류 및 발견
US20140233811A1 (en) Summarizing a photo album
US9621505B1 (en) Providing images with notifications
EP2905678A1 (en) Method and system for displaying content to a user
Granda et al. Face recognition systems in math classroom through computer vision traditional techniques
CN106537417B (zh) Summarizing a photo album
CN115484474A (zh) Video clip processing method and apparatus, electronic device, and storage medium
Khansama et al. A hybrid face recognition scheme in a heterogenous and cluttered environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15849672

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15849672

Country of ref document: EP

Kind code of ref document: A1