WO2018028253A1 - Information processing method, apparatus, and storage medium - Google Patents

Information processing method, apparatus, and storage medium

Info

Publication number
WO2018028253A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
feature
target file
feature information
video
Prior art date
Application number
PCT/CN2017/082912
Other languages
English (en)
French (fr)
Inventor
肖非
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2018028253A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/432Query formulation
    • G06F16/434Query formulation using image data, e.g. images, photos, pictures taken by a user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Definitions

  • The present disclosure relates to the field of communications, and in particular to an information processing method, apparatus, and storage medium.
  • In the existing art, captured pictures or videos are stored in the chronological order of shooting, which is inconvenient for the user to search and browse; the user experience is poor and user needs are not met. The technical problem to be solved by the embodiments of the present disclosure is therefore to provide an information processing method, apparatus, and storage medium.
  • To this end, an embodiment of the present disclosure provides an information processing method, including: acquiring feature information of a target file; establishing a correspondence between the feature information and the target file; and classifying and storing the target file based on the feature information and the correspondence.
  • In the above solution, acquiring the feature information of the target file includes: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
  • Determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result; and finding the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
  • The feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • The method further includes: receiving an invocation request, where the invocation request carries feature information of a target file; searching the correspondence for the target file corresponding to the feature information; and invoking the target file.
  • An embodiment of the present disclosure further provides an information processing apparatus, including:
  • an acquisition module, configured to acquire feature information of a target file;
  • a correspondence establishing module, configured to establish a correspondence between the feature information and the target file; and
  • a saving module, configured to classify and store the target file based on the feature information and the correspondence.
  • The acquisition module includes:
  • a first extraction submodule, configured to perform feature recognition on an image in the target file to obtain a feature recognition result; and
  • a determining submodule, configured to determine, according to a preset rule, feature information that matches the feature recognition result.
  • The determining submodule is configured to search a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result, and to find the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
  • The feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • The apparatus further includes:
  • a receiving module, configured to receive an invocation request, where the invocation request carries feature information of a target file;
  • a search module, configured to search the correspondence for the target file corresponding to the feature information; and
  • a calling module, configured to invoke the target file.
  • An embodiment of the present disclosure further provides a computer storage medium storing computer-executable instructions for executing the information processing method described above.
  • An advantageous effect of the present disclosure is as follows. According to the information processing method, apparatus, and storage medium provided by the embodiments of the present disclosure, the method includes acquiring feature information of a target file, where the target file includes a picture or a video; correspondingly, acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above solution, when a picture or video is stored, it is classified and stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
  • FIG. 1 is a schematic flowchart of an information processing method according to Embodiment 1 of the present disclosure;
  • FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 2 of the present disclosure;
  • FIG. 3 is a schematic structural diagram of an information processing apparatus according to Embodiment 3 of the present disclosure;
  • FIG. 4 is a schematic structural diagram of an information processing apparatus according to Embodiment 4 of the present disclosure.
  • Embodiment 1 provides an information processing method. As shown in FIG. 1, the processing flow of the method includes the following steps.
  • Step S101: acquiring feature information of a target file.
  • Here, the target file includes a picture or a video.
  • Correspondingly, acquiring the feature information of the target file includes: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
  • Determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result; and finding the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
  • The feature recognition includes biometric recognition and environmental feature recognition; the feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the person's mood, for example, laughing indicates a happy mood.
  • The feature information may further include environment feature information. Correspondingly, acquiring the feature information of the target file may further include acquiring environment feature information of the current shooting environment when the picture or video is captured. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the Global Positioning System (GPS) function built into the mobile terminal (such as a mobile phone): the GPS coordinates are resolved into a shooting location through a network query. The shooting weather information can be obtained by querying the network with the shooting time and shooting location.
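As an illustration of gathering such environment feature information, the following is a minimal sketch that reads the shooting time and GPS coordinates from a photo's EXIF metadata using Pillow. The tag names come from the EXIF standard; the weather lookup is left as a comment because the patent does not name a specific weather service.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_shooting_metadata(path):
    """Read shooting time and GPS coordinates from a photo's EXIF tags."""
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag, tag): value for tag, value in exif.items()}
    gps_raw = named.get("GPSInfo", {})
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
    # The weather would then be fetched from a network weather service
    # using this shooting time and the resolved shooting location.
    return {
        "shooting_time": named.get("DateTimeOriginal"),  # e.g. "2016:07:10 14:03:22"
        "gps_latitude": gps.get("GPSLatitude"),          # degree/minute/second tuples
        "gps_longitude": gps.get("GPSLongitude"),
    }
```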
  • Parsing the image feature information from the image of the picture or the video includes: performing face recognition on the image in the picture, or on the image of a target frame in the video; extracting biometric information from the recognized face image; and generating image feature information from the extracted biometric information according to a preset rule.
  • A face recognition algorithm can be used to extract biometric information from each face image successfully recognized. The extracted mouth corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including data such as the left and right mouth corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
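To make the geometry concrete, here is a minimal sketch of how the mouth-corner arc and lip height difference described above might be computed from landmark coordinates. The landmark inputs are assumed to come from a prior face recognition step, and the exact formulas are illustrative rather than the patent's.

```python
import math

def mouth_geometry(left_corner, right_corner, upper_lip, lower_lip):
    """Compute the mouth-corner arc and lip height difference from
    (x, y) landmark coordinates, assuming the y axis points up."""
    # Height difference between the upper and lower lip feature points.
    lip_height_diff = upper_lip[1] - lower_lip[1]
    # How far the corners rise above (positive) or fall below (negative)
    # the midpoint of the lips, expressed as an angle in radians.
    mid_y = (upper_lip[1] + lower_lip[1]) / 2.0
    corner_rise = ((left_corner[1] - mid_y) + (right_corner[1] - mid_y)) / 2.0
    mouth_width = abs(right_corner[0] - left_corner[0]) or 1.0
    corner_arc = math.atan2(corner_rise, mouth_width / 2.0)
    return corner_arc, lip_height_diff

# A raised mouth-corner contour (smiling or laughing) yields a positive corner_arc.
```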
  • The correspondence between preset biometric information templates and image feature information may be saved as a correspondence table, for example as shown in Table 1. It should be understood that the content of the biometric information templates in Table 1 is simplified for ease of understanding. The actual biometric information templates are divided by the mouth geometric feature information within the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth corner arcs and height differences; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth corner arc and height difference are both positive and fall within a preset threshold range. These facial expression features are stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database together with the correspondence table between the biometric information templates and the image feature information.
  • The mouth geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
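The one-by-one comparison against pre-classified templates can be sketched as follows. The similarity measure and the template vectors are assumptions for illustration, since the patent does not specify them.

```python
def match_expression(extracted, templates):
    """Compare extracted mouth geometry against pre-classified templates
    one by one and return the expression label with the highest similarity.

    `extracted` and each template vector are (corner_arc, lip_height_diff)
    tuples; `templates` maps a label such as "happy" to its vector."""
    def similarity(a, b):
        # Inverse-distance similarity; any monotone measure would do.
        distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + distance)

    scores = {label: similarity(extracted, vector)
              for label, vector in templates.items()}
    return max(scores, key=scores.get)

# Example with simplified template vectors (illustrative values only):
templates = {"happy": (0.35, 0.6), "neutral": (0.0, 0.3), "sad": (-0.3, 0.25)}
print(match_expression((0.3, 0.55), templates))  # -> "happy"
```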
  • Before face recognition is performed on the image of a target frame in the video, the method further includes: extracting the target frame from the compressed, stored video file in a preset manner.
  • To save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of the preceding and succeeding frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame stores the difference between the current frame and both the preceding and succeeding frames; a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by having B and P frames store only these differences, thereby reducing the amount of information stored and transmitted.
  • Obtaining a picture from B or P frames requires parsing the preceding and succeeding frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frames can be restored without the accumulated noise introduced by preceding and succeeding frames affecting subsequent restoration, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I frames are parsed out, i.e., the target frames are extracted.
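A minimal sketch of extracting only the I frames from a compressed video, using the PyAV bindings for FFmpeg. It relies on the demuxer's keyframe flag rather than computing the I-frame interval from the header as the text describes, which is a simplifying assumption.

```python
import av  # PyAV, Python bindings for FFmpeg

def extract_i_frames(video_path):
    """Demux a compressed video and decode only its I frames (keyframes),
    which carry complete picture information, skipping B and P frames."""
    images = []
    with av.open(video_path) as container:
        stream = container.streams.video[0]
        for packet in container.demux(stream):
            if packet.is_keyframe:             # I frames only
                for frame in packet.decode():  # decode just this packet
                    images.append(frame.to_image())  # PIL.Image for face recognition
    return images
```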
  • Step S102: establishing a correspondence between the feature information and the target file, and classifying and storing the target file based on the feature information and the correspondence.
  • Here, after the image feature information of the picture or video is acquired, a correspondence between the image feature information and the picture or video is established.
  • For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, a correspondence with the shooting time information and/or shooting location information and/or shooting weather information may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence are saved.
  • For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, which can be stored in the database of the mobile terminal (such as a mobile phone). The user can choose to upload the log to the cloud, where the cloud database is kept consistent with the mobile terminal as a local backup. The default feature information of the log is saved in a list (linked-list) data structure.
  • For example, the time, location, and weather when a picture was taken, and the mood of the person in the picture, are acquired, such as: July 10, Shanghai, sunny, happy. A correspondence is established between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved as a log in the local database; information can also be added as needed, such as "safari" or "travel". The user can choose whether to upload the log to the cloud; if the user does not upload it at the time, it can be uploaded later at any time as needed. After a local log is deleted, it can be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With this scheme, text or emoticons can be generated automatically and in real time for the captured video or image from information such as time, location, weather, and mood, forming a log that is convenient for the user to manage.
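As a sketch of the log record and the list structure mentioned above, the following uses a plain Python list of entries. The field names are illustrative assumptions; a true linked list could be substituted without changing the idea.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogEntry:
    """One log record linking a target file to its feature information."""
    file_path: str          # picture or video
    shooting_time: str      # e.g. "2016-07-10"
    location: str           # e.g. "Shanghai"
    weather: str            # e.g. "sunny"
    mood: str               # e.g. "happy", from expression recognition
    keywords: List[str] = field(default_factory=list)  # user-added, editable

# The default feature information is kept in a list structure;
# the user can append keywords such as "safari" or "travel" later.
log: List[LogEntry] = []
log.append(LogEntry("IMG_0710.jpg", "2016-07-10", "Shanghai", "sunny", "happy"))
log[0].keywords.append("travel")
```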
  • According to the information processing method provided by this embodiment of the present disclosure, the method includes acquiring feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above solution, when a picture or video is stored, it is classified and stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
  • Embodiment 2 provides an information processing method. As shown in FIG. 2, the processing flow of the method includes the following steps.
  • Step S201: acquiring feature information of a target file.
  • Here, the target file includes a picture or a video.
  • Correspondingly, acquiring the feature information of the target file includes: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
  • Determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result; and finding the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
  • The feature recognition includes biometric recognition and environmental feature recognition. The feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the person's mood, for example, laughing indicates a happy mood.
  • The feature information may further include environment feature information. Correspondingly, acquiring the feature information of the target file may further include acquiring environment feature information of the current shooting environment when the picture or video is captured. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the Global Positioning System (GPS) function built into the mobile terminal (such as a mobile phone): the GPS coordinates are resolved into a shooting location through a network query. The shooting weather information can be obtained by querying the network with the shooting time and shooting location.
  • Parsing the image feature information from the image of the picture or the video includes: performing face recognition on the image in the picture, or on the image of a target frame in the video; extracting biometric information from the recognized face image; and generating image feature information from the extracted biometric information according to a preset rule.
  • A face recognition algorithm can be used to extract biometric information from each face image successfully recognized. The extracted mouth corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including data such as the left and right mouth corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
  • The correspondence between preset biometric information templates and image feature information may be saved as a correspondence table, for example as shown in Table 1. It should be understood that the content of the biometric information templates in Table 1 is simplified for ease of understanding. The actual biometric information templates are divided by the mouth geometric feature information within the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth corner arcs and height differences; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth corner arc and height difference are both positive and fall within a preset threshold range. These facial expression features are stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database together with the correspondence table between the biometric information templates and the image feature information.
  • The mouth geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
  • Before face recognition is performed on the image of a target frame in the video, the method further includes: extracting the target frame from the compressed, stored video file in a preset manner.
  • To save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of the preceding and succeeding frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame stores the difference between the current frame and both the preceding and succeeding frames; a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by having B and P frames store only these differences, thereby reducing the amount of information stored and transmitted.
  • Obtaining a picture from B or P frames requires parsing the preceding and succeeding frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frames can be restored without the accumulated noise introduced by preceding and succeeding frames affecting subsequent restoration, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I frames are parsed out, i.e., the target frames are extracted.
  • Step S202: establishing a correspondence between the feature information and the target file, and storing the target file based on the feature information and the correspondence.
  • Here, after the image feature information of the picture or video is acquired, a correspondence between the image feature information and the picture or video is established.
  • For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, a correspondence with the shooting time information and/or shooting location information and/or shooting weather information may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence are saved.
  • For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, which can be stored in the database of the mobile terminal. The user can choose to upload the log to the cloud, where the cloud database is kept consistent with the mobile terminal as a local backup. The default feature information of the log is saved in a list (linked-list) data structure.
  • With this scheme, text or emoticons can be generated automatically and in real time for the captured video or image from information such as time, location, weather, and mood, forming a log that is convenient for the user to manage.
  • Step S203: receiving an invocation request, where the invocation request carries feature information of a target file.
  • Here, the target file includes a picture or a video, and the feature information includes at least image feature information. The feature information of the target file includes at least image feature information parsed from the image of the picture or the video; the image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying, from which the person's mood can be reflected, for example, laughing indicates a happy mood.
  • The feature information may further include at least one of shooting time information, shooting location information, and shooting weather information from when the picture or video was captured.
  • Step S204: searching the correspondence for the target file corresponding to the feature information.
  • In an embodiment, for the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, whose default feature information is saved in a list (linked-list) data structure. The log can subsequently be retrieved and queried through the list structure according to its feature information, and the user can also add feature information according to their own needs and edit or modify it.
  • For example, logs, photos, or videos can be queried by attributes such as time, location, and mood, e.g., querying the corresponding log, photo, or video by the time period 2016.6.1-2016.6.30, "travel", "happy", and keywords added later by the user. Since editing, modification, query, and retrieval functions are also provided for logs, photos, and videos, it is convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
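A minimal sketch of such a query, filtering by a time period plus mood and keyword terms. It reuses the `log` list and `LogEntry` record from the earlier sketch, and the matching rule (every term must appear in the entry's mood, location, or keywords) is an illustrative assumption.

```python
from datetime import date

def query_log(log, start, end, *terms):
    """Return entries whose shooting date falls in [start, end] and whose
    mood, location, or keywords contain every search term."""
    results = []
    for entry in log:
        when = date(*map(int, entry.shooting_time.split("-")))
        haystack = {entry.mood, entry.location, *entry.keywords}
        if start <= when <= end and all(t in haystack for t in terms):
            results.append(entry)
    return results

# e.g. logs from 2016.6.1-2016.6.30 tagged "travel" with a happy mood:
matches = query_log(log, date(2016, 6, 1), date(2016, 6, 30), "travel", "happy")
```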
  • Step S205: invoking the found target file.
  • In an embodiment, the found target file may be displayed, sent to friends or family, or uploaded to Qzone, Weibo, or WeChat Moments.
  • According to the file invocation method provided by this embodiment of the present disclosure, the method includes receiving an invocation request, where the invocation request carries feature information of a target file; searching for the target file corresponding to the feature information according to the feature information and the correspondence saved by the information processing method of Embodiment 1; and invoking the found target file. With the above solution, when a picture or video is stored, it is stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
  • Embodiment 3 provides an information processing apparatus. As shown in FIG. 3, the apparatus includes:
  • the acquisition module 301, configured to acquire feature information of a target file, where the target file includes a picture or a video;
  • the correspondence establishing module 302, configured to establish a correspondence between the feature information and the target file; and
  • the saving module 303, configured to store the target file based on the feature information and the correspondence.
  • The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the person's mood, for example, laughing indicates a happy mood. The feature information may also include environment feature information. The acquisition module 301 is further configured to acquire environment feature information of the current shooting environment when a picture or video is taken. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the GPS function built into the mobile terminal (such as a mobile phone): the GPS coordinates are resolved into a shooting location through a network query. The shooting weather information can be obtained by querying the network with the shooting time and shooting location.
  • The acquisition module 301 includes an acquisition submodule 3011 and a determining submodule 3012. The acquisition submodule 3011 is configured to perform feature recognition on an image in the target file to obtain a feature recognition result; the determining submodule 3012 is configured to determine, according to a preset rule, feature information that matches the feature recognition result. Here, the feature recognition includes biometric recognition and environmental feature recognition.
  • The determining submodule 3012 is configured to search a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result, and to find the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
  • The feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • A face recognition algorithm can be used to extract biometric information from each face image successfully recognized. The extracted mouth corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including data such as the left and right mouth corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
  • The correspondence between preset biometric information templates and image feature information may be saved as a correspondence table, for example as shown in Table 1. It should be understood that the content of the biometric information templates in Table 1 is simplified for ease of understanding. The actual biometric information templates are divided by the mouth geometric feature information within the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth corner arcs and height differences; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth corner arc and height difference are both positive and fall within a preset threshold range. These facial expression features are stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database together with the correspondence table between the biometric information templates and the image feature information.
  • The mouth geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
  • The apparatus further includes a second extraction module 304, configured to extract target frames from the compressed, stored video file in a preset manner.
  • To save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of the preceding and succeeding frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame stores the difference between the current frame and both the preceding and succeeding frames; a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by having B and P frames store only these differences, thereby reducing the amount of information stored and transmitted. Obtaining a picture from B or P frames requires parsing the preceding and succeeding frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frames can be restored without the accumulated noise introduced by preceding and succeeding frames affecting subsequent restoration, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I frames are parsed out, i.e., the target frames are extracted.
  • After the acquisition module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes a correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
  • For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, the correspondence establishing module 302 may also establish a correspondence with the shooting time information and/or shooting location information and/or shooting weather information, and the saving module 303 saves the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence.
  • For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the saving module 303 in the form of a generated log; the user may choose to upload the log to the cloud, and the cloud database is kept consistent with the saving module 303. The default feature information of the log is saved in a list (linked-list) data structure.
  • For example, the acquisition module 301 acquires the time, location, and weather when a picture was taken, and the mood of the person in the picture, such as: July 10, Shanghai, sunny, happy. The correspondence establishing module 302 establishes a correspondence between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved in the saving module 303 in the form of a log; information can also be added as needed, such as "safari" or "travel". The user can choose whether to upload the log to the cloud; if the user does not upload it at the time, it can be uploaded later at any time as needed. After a local log is deleted, it can be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With this scheme, text or emoticons can be generated automatically and in real time for the captured video or image from information such as time, location, weather, and mood, forming a log that is convenient for the user to manage.
  • According to the information processing apparatus provided by this embodiment of the present disclosure: the acquisition module 301 acquires feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; the correspondence establishing module 302 establishes a correspondence between the feature information and the target file; and the saving module 303 saves the feature information, the target file, and the correspondence. With the above solution, when a picture or video is stored, it is classified and stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
  • Embodiment 4 provides an information processing apparatus. As shown in FIG. 4, the apparatus includes:
  • the acquisition module 301, configured to acquire feature information of a target file, where the target file includes a picture or a video;
  • the correspondence establishing module 302, configured to establish a correspondence between the feature information and the target file; and
  • the saving module 303, configured to store the target file based on the feature information and the correspondence.
  • The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the person's mood, for example, laughing indicates a happy mood. The feature information may also include environment feature information. The acquisition module 301 is further configured to acquire environment feature information of the current shooting environment when a picture or video is taken. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the GPS function built into the mobile terminal (such as a mobile phone): the GPS coordinates are resolved into a shooting location through a network query. The shooting weather information can be obtained by querying the network with the shooting time and shooting location.
  • The receiving module 304 is configured to receive an invocation request, where the invocation request carries feature information of a target file.
  • The searching module 305 is configured to search the correspondence for the target file corresponding to the feature information.
  • The calling module 306 is configured to invoke the target file.
  • For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, whose default feature information is saved in a list (linked-list) data structure. The log can subsequently be retrieved and queried through the list structure according to its feature information, and the user can also add feature information according to their own needs and edit or modify it. For example, logs, photos, or videos can be queried by attributes such as time, location, and mood, e.g., querying the corresponding log, photo, or video by the time period 2016.6.1-2016.6.30, "travel", "happy", and keywords added later by the user. Since editing, modification, query, and retrieval functions are also provided for logs, photos, and videos, it is convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
  • The calling module 306 invokes the target file found by the searching module 305; in an embodiment, the found target file may be displayed, sent to friends or family, or uploaded to Qzone, Weibo, or WeChat Moments.
  • The acquisition module 301 includes an acquisition submodule 3011 and a determining submodule 3012. The acquisition submodule 3011 is configured to perform feature recognition on an image in the target file to obtain a feature recognition result; the determining submodule 3012 is configured to determine, according to a preset rule, feature information that matches the feature recognition result. Here, the feature recognition includes biometric recognition and environmental feature recognition.
  • The determining submodule 3012 is configured to search a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result, and to find the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
  • The feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • A face recognition algorithm can be used to extract biometric information from each face image successfully recognized. The extracted mouth corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including data such as the left and right mouth corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
  • The correspondence between preset biometric information templates and image feature information may be saved as a correspondence table, for example as shown in Table 1. It should be understood that the content of the biometric information templates in Table 1 is simplified for ease of understanding. The actual biometric information templates are divided by the mouth geometric feature information within the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth corner arcs and height differences; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth corner arc and height difference are both positive and fall within a preset threshold range. These facial expression features are stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database together with the correspondence table between the biometric information templates and the image feature information.
  • The mouth geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
  • The apparatus further includes a second extraction module, configured to extract target frames from the compressed, stored video file in a preset manner.
  • To save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of the preceding and succeeding frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame stores the difference between the current frame and both the preceding and succeeding frames; a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by having B and P frames store only these differences, thereby reducing the amount of information stored and transmitted. Obtaining a picture from B or P frames requires parsing the preceding and succeeding frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frames can be restored without the accumulated noise introduced by preceding and succeeding frames affecting subsequent restoration, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I frames are parsed out, i.e., the target frames are extracted.
  • After the acquisition module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes a correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
  • For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, the correspondence establishing module 302 may also establish a correspondence with the shooting time information and/or shooting location information and/or shooting weather information, and the saving module 303 saves the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence.
  • For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the saving module 303 in the form of a generated log; the user may choose to upload the log to the cloud, and the cloud database is kept consistent with the saving module 303. The default feature information of the log is saved in a list (linked-list) data structure.
  • For example, the acquisition module 301 acquires the time, location, and weather when a picture was taken, and the mood of the person in the picture, such as: July 10, Shanghai, sunny, happy. The correspondence establishing module 302 establishes a correspondence between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved in the saving module 303 in the form of a log; information can also be added as needed, such as "safari" or "travel". The user can choose whether to upload the log to the cloud; if the user does not upload it at the time, it can be uploaded later at any time as needed. After a local log is deleted, it can be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With this scheme, text or emoticons can be generated automatically and in real time for the captured video or image from information such as time, location, weather, and mood, forming a log that is convenient for the user to manage.
  • An embodiment of the present disclosure further provides a computer storage medium storing computer-executable instructions for executing the information processing method described above.
  • The acquisition module 301, the correspondence establishing module 302, the saving module 303, the receiving module 304, the searching module 305, and the calling module 306 in the information processing apparatus proposed in the embodiments of the present disclosure may all be implemented by a processor, or by a specific logic circuit. The processor may be located on an electronic device such as a server; in practical applications, the processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
  • The modules or steps of the above embodiments of the present disclosure may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across multiple computing devices. They may be implemented by program code executable by a computing device, so that they can be stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and executed by a computing device. In some cases, the steps shown or described may be performed in an order different from that described herein; alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Therefore, the present disclosure is not limited to any specific combination of hardware and software.
  • The information processing method provided by the embodiments of the present disclosure acquires feature information of a target file, establishes a correspondence between the feature information and the target file, and classifies and stores the target file based on the feature information and the correspondence.
  • With the above solution, pictures or videos are stored according to their feature information, so that users can find and browse them quickly, which improves the user experience and better meets user needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An information processing method, apparatus, and storage medium. The information processing method includes: acquiring feature information of a target file; establishing a correspondence between the feature information and the target file; and classifying and storing the target file based on the feature information and the correspondence. With the above solution, when a picture or video is stored, it is stored according to the feature information of the picture or video, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.

Description

Information processing method, apparatus, and storage medium
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 201610647621.8.X filed on August 10, 2016, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of communications, and in particular to an information processing method, apparatus, and storage medium.
Background
With the popularization of smartphones, functions such as taking photos and shooting videos on smartphones are used more and more, and smartphones have even gradually replaced digital cameras. Owing to their portability, more and more people are used to recording the moments of their life and work with a smartphone. However, captured pictures or videos are currently stored in the chronological order of shooting, which is inconvenient for the user to search and browse; the user experience is poor and user needs are not met.
Summary
The technical problem mainly solved by the embodiments of the present disclosure is to provide an information processing method, apparatus, and storage medium, so as to solve the problem in the existing art that, when pictures or videos are stored, it is inconvenient for the user to search and browse them, the user experience is poor, and user needs cannot be met.
To solve the above technical problem, an embodiment of the present disclosure provides an information processing method, including:
acquiring feature information of a target file;
establishing a correspondence between the feature information and the target file; and
classifying and storing the target file based on the feature information and the correspondence.
In the above solution, acquiring the feature information of the target file includes: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
In the above solution, determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result; and finding the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
In the above solution, the feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
In the above solution, the method further includes:
receiving invocation request information, where the invocation request information carries feature information of a target file;
searching the correspondence for the target file corresponding to the feature information; and
invoking the target file.
To solve the above technical problem, an embodiment of the present disclosure provides an information processing apparatus, including:
an acquisition module, configured to acquire feature information of a target file;
a correspondence establishing module, configured to establish a correspondence between the feature information and the target file; and
a saving module, configured to classify and store the target file based on the feature information and the correspondence.
In the above solution, the acquisition module includes:
a first extraction submodule, configured to perform feature recognition on an image in the target file to obtain a feature recognition result; and
a determining submodule, configured to determine, according to a preset rule, feature information that matches the feature recognition result.
In the above solution, the determining submodule is configured to search a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result, and to find the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
In the above solution, the feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
In the above solution, the apparatus further includes:
a receiving module, configured to receive an invocation request, where the invocation request carries feature information of a target file;
a search module, configured to search the correspondence for the target file corresponding to the feature information; and
a calling module, configured to invoke the target file.
To solve the above technical problem, an embodiment of the present disclosure further provides a computer storage medium storing computer-executable instructions for executing the information processing method described above.
The advantageous effects of the present disclosure are as follows. According to the information processing method, apparatus, and storage medium provided by the embodiments of the present disclosure, the method includes acquiring feature information of a target file, where the target file includes a picture or a video; correspondingly, acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above solution, when a picture or video is stored, it is classified and stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
Brief description of the drawings
FIG. 1 is a schematic flowchart of an information processing method according to Embodiment 1 of the present disclosure;
FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 2 of the present disclosure;
FIG. 3 is a schematic structural diagram of an information processing apparatus according to Embodiment 3 of the present disclosure;
FIG. 4 is a schematic structural diagram of an information processing apparatus according to Embodiment 4 of the present disclosure.
Detailed description
The embodiments of the present disclosure are described in further detail below through specific implementations with reference to the accompanying drawings.
Embodiment 1
This embodiment provides an information processing method. As shown in FIG. 1, the processing flow of the method includes the following steps.
Step S101: acquiring feature information of a target file.
Here, the target file includes a picture or a video.
Correspondingly, acquiring the feature information of the target file includes at least: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
Determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result; and finding the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
The feature recognition includes biometric recognition and environmental feature recognition; the feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the person's mood, for example, laughing indicates a happy mood.
The feature information may further include environment feature information. Correspondingly, acquiring the feature information of the target file may further include acquiring environment feature information of the current shooting environment when the picture or video is captured. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the Global Positioning System (GPS) function built into the mobile terminal (such as a mobile phone): the GPS coordinates are resolved into a shooting location through a network query. The shooting weather information can be obtained by querying the network with the shooting time and shooting location.
Parsing the image feature information from the image of the picture or the video includes: performing face recognition on the image in the picture, or on the image of a target frame in the video; extracting biometric information from the recognized face image; and generating image feature information from the extracted biometric information according to a preset rule.
A face recognition algorithm can be used to extract biometric information from each face image successfully recognized. The extracted mouth corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including data such as the left and right mouth corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
The correspondence between preset biometric information templates and image feature information may be saved as a correspondence table; for example, the correspondence table may be as shown in Table 1.
Table 1 (reproduced as an image in the original: Figure PCTCN2017082912-appb-000001)
It should be understood that, for ease of understanding the solution of this embodiment, the content of the biometric information templates in Table 1 is simplified. The actual biometric information templates are divided by the mouth geometric feature information within the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth corner arcs and height differences; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth corner arc and height difference are both positive and fall within a preset threshold range. These facial expression features are stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database together with the correspondence table between the biometric information templates and the image feature information.
The mouth geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
Before face recognition is performed on the image of a target frame in the video, the method further includes: extracting the target frame from the compressed, stored video file in a preset manner.
To save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of the preceding and succeeding frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame stores the difference between the current frame and both the preceding and succeeding frames; a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by having B and P frames store only these differences, thereby reducing the amount of information stored and transmitted. Obtaining a picture from B or P frames requires parsing the preceding and succeeding frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frames can be restored without the accumulated noise introduced by preceding and succeeding frames affecting subsequent restoration, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I frames are parsed out, i.e., the target frames are extracted.
Step S102: establishing a correspondence between the feature information and the target file, and classifying and storing the target file based on the feature information and the correspondence.
Here, after the image feature information of the picture or video is acquired, a correspondence between the image feature information and the picture or video is established.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, a correspondence with the shooting time information and/or shooting location information and/or shooting weather information may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence are saved.
For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, which can be stored in the database of the mobile terminal (such as a mobile phone); the user can also choose to upload the log to the cloud, where the cloud database is kept consistent with the mobile terminal as a local backup. The default feature information of the log is saved in a list (linked-list) data structure.
For example, the time, location, and weather when a picture was taken, and the mood of the person in the picture, are acquired, such as: July 10, Shanghai, sunny, happy. A correspondence is established between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved as a log in the local database; information can also be added as needed, such as "safari" or "travel". The user can choose whether to upload the log to the cloud; if the user does not upload it at the time, it can be uploaded later at any time as needed. After a local log is deleted, it can be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With the above scheme, text or emoticons can be generated automatically and in real time for the captured video or image from information such as time, location, weather, and mood, forming a log that is convenient for the user to manage.
According to the information processing method provided by this embodiment of the present disclosure, the method includes acquiring feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above solution, when a picture or video is stored, it is classified and stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
Embodiment 2
This embodiment provides an information processing method. As shown in FIG. 2, the processing flow of the method includes the following steps.
Step S201: acquiring feature information of a target file.
Here, the target file includes a picture or a video.
Correspondingly, acquiring the feature information of the target file includes at least: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
Determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template that matches the feature recognition result; and finding the feature information corresponding to that biometric information template according to a preset correspondence between biometric information templates and feature information.
The feature recognition includes biometric recognition and environmental feature recognition. The feature recognition result includes at least one of mouth corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the person's mood, for example, laughing indicates a happy mood.
The feature information may further include environment feature information. Correspondingly, acquiring the feature information of the target file may further include acquiring environment feature information of the current shooting environment when the picture or video is captured. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the Global Positioning System (GPS) function built into the mobile terminal (such as a mobile phone): the GPS coordinates are resolved into a shooting location through a network query. The shooting weather information can be obtained by querying the network with the shooting time and shooting location.
Parsing the image feature information from the image of the picture or the video includes: performing face recognition on the image in the picture, or on the image of a target frame in the video; extracting biometric information from the recognized face image; and generating image feature information from the extracted biometric information according to a preset rule.
A face recognition algorithm can be used to extract biometric information from each face image successfully recognized. The extracted mouth corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including data such as the left and right mouth corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
The correspondence between preset biometric information templates and image feature information may be saved as a correspondence table; for example, the correspondence table may be as shown in Table 1.
Table 1 (reproduced as an image in the original: Figure PCTCN2017082912-appb-000002)
It should be understood that, for ease of understanding the solution of this embodiment, the content of the biometric information templates in Table 1 is simplified. The actual biometric information templates are divided by the mouth geometric feature information within the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth corner arcs and height differences; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth corner arc and height difference are both positive and fall within a preset threshold range. These facial expression features are stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database together with the correspondence table between the biometric information templates and the image feature information.
The mouth geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
Before face recognition is performed on the image of a target frame in the video, the method further includes: extracting the target frame from the compressed, stored video file in a preset manner.
To save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of the preceding and succeeding frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame stores the difference between the current frame and both the preceding and succeeding frames; a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by having B and P frames store only these differences, thereby reducing the amount of information stored and transmitted. Obtaining a picture from B or P frames requires parsing the preceding and succeeding frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frames can be restored without the accumulated noise introduced by preceding and succeeding frames affecting subsequent restoration, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I frames are parsed out, i.e., the target frames are extracted.
Step S202: establishing a correspondence between the feature information and the target file, and storing the target file based on the feature information and the correspondence.
Here, after the image feature information of the picture or video is acquired, a correspondence between the image feature information and the picture or video is established.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, a correspondence with the shooting time information and/or shooting location information and/or shooting weather information may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence are saved.
For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, which can be stored in the database of the mobile terminal (such as a mobile phone); the user can also choose to upload the log to the cloud, where the cloud database is kept consistent with the mobile terminal as a local backup. The default feature information of the log is saved in a list (linked-list) data structure.
For example, the time, location, and weather when a picture was taken, and the mood of the person in the picture, are acquired, such as: July 10, Shanghai, sunny, happy. A correspondence is established between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved as a log in the local database; information can also be added as needed, such as "safari" or "travel". The user can choose whether to upload the log to the cloud; if the user does not upload it at the time, it can be uploaded later at any time as needed. After a local log is deleted, it can be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With the above scheme, text or emoticons can be generated automatically and in real time for the captured video or image from information such as time, location, weather, and mood, forming a log that is convenient for the user to manage.
Step S203: receiving an invocation request, where the invocation request carries feature information of a target file.
Here, the target file includes a picture or a video, and the feature information includes at least image feature information. The feature information of the target file includes at least image feature information parsed from the image of the picture or the video; the image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying, from which the person's mood can be reflected, for example, laughing indicates a happy mood. The feature information may further include at least one of shooting time information, shooting location information, and shooting weather information from when the picture or video was captured.
Step S204: searching the correspondence for the target file corresponding to the feature information.
In an embodiment, for the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, and shooting weather information and that image have been established, they may also be saved in the form of a generated log, whose default feature information is saved in a list (linked-list) data structure. The log can subsequently be retrieved and queried through the list structure according to its feature information, and the user can also add feature information according to their own needs and edit or modify it.
For example, logs, photos, or videos can be queried by attributes such as time, location, and mood, e.g., querying the corresponding log, photo, or video by the time period 2016.6.1-2016.6.30, "travel", "happy", and keywords added later by the user. Since editing, modification, query, and retrieval functions are also provided for logs, photos, and videos, it is convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
Step S205: invoking the found target file.
In an embodiment, the found target file may be displayed, sent to friends or family, or uploaded to Qzone, Weibo, or WeChat Moments.
According to the file invocation method provided by this embodiment of the present disclosure, the method includes receiving an invocation request, where the invocation request carries feature information of a target file; searching for the target file corresponding to the feature information according to the feature information and the correspondence saved by the information processing method of Embodiment 1; and invoking the found target file. With the above solution, when a picture or video is stored, it is stored according to its feature information, which allows the user to find and browse quickly, improves the user experience, and better meets user needs.
Embodiment 3
This embodiment provides an information processing apparatus whose composition, as shown in FIG. 3, includes:
an acquisition module 301 configured to acquire feature information of a target file,
where the target file includes a picture or video;
a correspondence establishing module 302 configured to establish a correspondence between the feature information and the target file; and
a saving module 303 configured to store the target file based on the feature information and the correspondence.
The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying, and the expression information can reflect the person's mood; for example, laughing indicates a happy mood. The feature information may further include environmental feature information. The acquisition module 301 is further configured to acquire environmental feature information of the current shooting environment when a picture or video is captured. The environmental feature information includes at least one of: shooting time information, shooting location information, and shooting weather information. The shooting location information may be obtained through the GPS function built into a mobile terminal (such as a mobile phone), the GPS information being used to query the shooting location information over a network. The shooting weather information may be obtained through a network query according to the shooting time information and the shooting location information at the time of shooting.
The acquisition module 301 includes an acquisition sub-module 3011 and a determination sub-module 3012. The acquisition sub-module 3011 is configured to perform feature recognition on the image in the target file to obtain a feature recognition result. The determination sub-module 3012 is configured to determine, according to a preset rule, the feature information matching the feature recognition result. Here, the feature recognition includes biometric feature recognition and environmental feature recognition.
The determination sub-module 3012 is configured to search a preset set of biometric information templates for the feature recognition result to obtain the biometric information template matching the feature recognition result, and to find, according to the preset correspondence between biometric information templates and feature information, the feature information corresponding to the biometric information template.
The feature recognition result includes at least one of: mouth-corner information, lip information, upper-and-lower-lip height information, shooting time information, shooting location information, and shooting weather information.
A face recognition algorithm may be used to extract biometric feature information from a face image successfully recognized during face recognition. The extracted mouth-corner information, lip information, and upper-and-lower-lip height information may be reflected by a mouth geometric feature map, which includes data such as the left and right mouth-corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
The correspondence between the preset biometric information templates and the image feature information may be saved as a correspondence table, for example, Table 1 below.
[Table 1 — correspondence between biometric information templates and image feature information (reproduced as image PCTCN2017082912-appb-000003 in the source document)]
It should be understood that, to facilitate understanding of the solution of this embodiment, the content of the biometric information templates in Table 1 has been simplified. The actual biometric information templates are divided according to the mouth geometric feature information within the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth-corner arcs and height differences; when a person smiles or laughs, the geometric contour of the mouth corners turns upward, and both the mouth-corner arc and the height difference are positive values within a preset threshold range. These facial expression features are stored in advance in a database according to the different expression types; that is, the biometric information templates are stored in the database, together with the correspondence table between the biometric information templates and the image feature information.
The mouth geometric feature information extracted through face recognition may be compared one by one with the biometric information templates pre-classified in the database to obtain different similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, that is, as the image feature information.
The apparatus further includes a second extraction module 304 configured to extract the target frame from the compressed video file in a preset manner.
Captured video is typically encoded and decoded with compression algorithms such as H.264 to save storage space, and each frame of a compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of preceding and following frames. In general, video frames fall into three types, I, B, and P, which store chrominance, luminance, and other information. An I-frame typically contains complete image information; a B-frame stores the difference between the current frame and the preceding and following frames; and a P-frame stores the difference between the current frame and the preceding frame. Compressed video reduces file size mainly by having B-frames and P-frames store only these differences, which reduces the amount of information to be stored and transmitted. Obtaining a picture from a B-frame or P-frame requires parsing the preceding and following frames as well, which is computationally expensive; therefore, in this embodiment only I-frames are parsed. To ensure that accumulated noise introduced by preceding and following frames does not affect subsequent reconstruction of B-frame and P-frame images, compressed video inserts I-frames containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I-frames are parsed out, thereby extracting the target frames.
After the acquisition module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes the correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, the correspondence establishing module 302 may also establish correspondences with the shooting time information and/or the shooting location information and/or the shooting weather information, and the saving module 303 saves the shooting time information and/or the shooting location information and/or the shooting weather information, the picture or video, and the correspondences.
For the same picture or video, if correspondences have been established between the image and the image feature information, shooting time information, shooting location information, and shooting weather information, they may also be saved to the saving module 303 in the form of a generated log; the user may also choose to upload the log to the cloud, where the cloud database is kept consistent with the saving module 303. The default feature information of the log is stored in a list (linked-list) data structure.
For example, the acquisition module 301 acquires the time, location, and weather at which a picture was taken, together with the mood information of the person in the picture, for example: July 10, Shanghai, sunny, happy mood. The correspondence establishing module 302 establishes correspondences between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved in the form of a log in the saving module 303, and information can be added as needed, for example: safari park, travel. The user is offered the choice of whether to upload to the cloud; if the user does not upload at the time, the upload can also be performed later at any time as needed. After a local log is deleted, it can also be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With the above solution, text or emoji stickers can be generated automatically, accurately, and in real time for the captured video or image from the time, location, weather, mood, and other information to form a log, which facilitates management by the user.
An information processing apparatus provided according to this embodiment of the present disclosure includes: the acquisition module 301, which acquires feature information of a target file, where the target file includes a picture or video and acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or video; the correspondence establishing module 302, which establishes the correspondence between the feature information and the target file; and the saving module 303, which saves the feature information, the target file, and the correspondence. With the above solution, when pictures or videos are stored, they are classified and stored according to their feature information, which makes it convenient for users to quickly search and browse, improves the user experience, and better meets user needs.
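To show how modules 301 to 303 could fit together in software, here is a minimal sketch of a pipeline with an acquisition step, a correspondence table, and a store. The class and method names are illustrative stand-ins, not the apparatus's actual implementation.

```python
class InformationProcessor:
    """Toy pipeline mirroring the acquisition (301), correspondence
    establishing (302), and saving (303) modules."""

    def __init__(self, extract_features):
        self.extract_features = extract_features  # e.g. face + environment
        self.correspondences = {}                 # feature value -> file paths
        self.store = {}                           # file path -> feature info

    def process(self, file_path: str) -> None:
        features = self.extract_features(file_path)       # module 301
        for value in features.values():                   # module 302
            self.correspondences.setdefault(value, []).append(file_path)
        self.store[file_path] = features                  # module 303

    def find(self, feature_value: str) -> list[str]:
        """Invocation path: look up files by one feature value."""
        return self.correspondences.get(feature_value, [])

proc = InformationProcessor(lambda p: {"mood": "happy", "place": "Shanghai"})
proc.process("IMG_0710.jpg")
print(proc.find("happy"))  # ['IMG_0710.jpg']
```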
Embodiment 4
This embodiment provides an information processing apparatus whose composition, as shown in FIG. 4, includes:
an acquisition module 301 configured to acquire feature information of a target file,
where the target file includes a picture or video;
a correspondence establishing module 302 configured to establish a correspondence between the feature information and the target file; and
a saving module 303 configured to store the target file based on the feature information and the correspondence.
The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying, and the expression information can reflect the person's mood; for example, laughing indicates a happy mood. The feature information may further include environmental feature information. The acquisition module 301 is further configured to acquire environmental feature information of the current shooting environment when a picture or video is captured. The environmental feature information includes at least one of: shooting time information, shooting location information, and shooting weather information. The shooting location information may be obtained through the GPS function built into a mobile terminal (such as a mobile phone), the GPS information being used to query the shooting location information over a network. The shooting weather information may be obtained through a network query according to the shooting time information and the shooting location information at the time of shooting.
The apparatus further includes a receiving module 304 configured to receive an invocation request, where the invocation request carries the feature information of a target file;
a search module 305 configured to search the correspondences for the target file corresponding to the feature information; and
an invocation module 306 configured to invoke the target file.
For the same picture or video, if correspondences have been established between the image and the image feature information, shooting time information, shooting location information, and shooting weather information, they may also be saved in the form of a generated log, whose default feature information is stored in a list (linked-list) data structure. The log can subsequently be retrieved and queried through the linked list according to the feature information of the log, and the user can also add feature information as needed and edit or modify that feature information.
For example, a log, photo, or video can be queried by attributes such as time, location, and mood; for instance, the corresponding log, photo, or video is found from the time period 2016.6.1-2016.6.30, "travel", "happy mood", and keywords added later by the user. Since editing, modification, query, and retrieval functions are also provided for logs, photos, and videos, this makes it convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
The invocation module 306 is configured to invoke the target file found by the search module 305.
In an embodiment, the found target file may be displayed, sent to friends or family, or uploaded to Qzone, Weibo, or WeChat Moments.
The acquisition module 301 includes an acquisition sub-module 3011 and a determination sub-module 3012. The acquisition sub-module 3011 is configured to perform feature recognition on the image in the target file to obtain a feature recognition result. The determination sub-module 3012 is configured to determine, according to a preset rule, the feature information matching the feature recognition result. Here, the feature recognition includes biometric feature recognition and environmental feature recognition.
The determination sub-module 3012 is configured to search a preset set of biometric information templates for the feature recognition result to obtain the biometric information template matching the feature recognition result, and to find, according to the preset correspondence between biometric information templates and feature information, the feature information corresponding to the biometric information template.
The feature recognition result includes at least one of: mouth-corner information, lip information, upper-and-lower-lip height information, shooting time information, shooting location information, and shooting weather information.
A face recognition algorithm may be used to extract biometric feature information from a face image successfully recognized during face recognition. The extracted mouth-corner information, lip information, and upper-and-lower-lip height information may be reflected by a mouth geometric feature map, which includes data such as the left and right mouth-corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
The correspondence between the preset biometric information templates and the image feature information may be saved as a correspondence table, for example, Table 1 below.
[Table 1 — correspondence between biometric information templates and image feature information (reproduced as image PCTCN2017082912-appb-000004 in the source document)]
It should be understood that, to facilitate understanding of the solution of this embodiment, the content of the biometric information templates in Table 1 has been simplified. The actual biometric information templates are divided according to the mouth geometric feature information within the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward arc of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth-corner arcs and height differences; when a person smiles or laughs, the geometric contour of the mouth corners turns upward, and both the mouth-corner arc and the height difference are positive values within a preset threshold range. These facial expression features are stored in advance in a database according to the different expression types; that is, the biometric information templates are stored in the database, together with the correspondence table between the biometric information templates and the image feature information.
The mouth geometric feature information extracted through face recognition may be compared one by one with the biometric information templates pre-classified in the database to obtain different similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, that is, as the image feature information.
The apparatus further includes a second extraction module 304 configured to extract the target frame from the compressed video file in a preset manner.
Captured video is typically encoded and decoded with compression algorithms such as H.264 to save storage space, and each frame of a compressed video does not necessarily carry complete information; a picture usually has to be reconstructed by computation from the information of preceding and following frames. In general, video frames fall into three types, I, B, and P, which store chrominance, luminance, and other information. An I-frame typically contains complete image information; a B-frame stores the difference between the current frame and the preceding and following frames; and a P-frame stores the difference between the current frame and the preceding frame. Compressed video reduces file size mainly by having B-frames and P-frames store only these differences, which reduces the amount of information to be stored and transmitted. Obtaining a picture from a B-frame or P-frame requires parsing the preceding and following frames as well, which is computationally expensive; therefore, in this embodiment only I-frames are parsed. To ensure that accumulated noise introduced by preceding and following frames does not affect subsequent reconstruction of B-frame and P-frame images, compressed video inserts I-frames containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all I-frames are parsed out, thereby extracting the target frames.
After the acquisition module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes the correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, the correspondence establishing module 302 may also establish correspondences with the shooting time information and/or the shooting location information and/or the shooting weather information, and the saving module 303 saves the shooting time information and/or the shooting location information and/or the shooting weather information, the picture or video, and the correspondences.
For the same picture or video, if correspondences have been established between the image and the image feature information, shooting time information, shooting location information, and shooting weather information, they may also be saved to the saving module 303 in the form of a generated log; the user may also choose to upload the log to the cloud, where the cloud database is kept consistent with the saving module 303. The default feature information of the log is stored in a list (linked-list) data structure.
For example, the acquisition module 301 acquires the time, location, and weather at which a picture was taken, together with the mood information of the person in the picture, for example: July 10, Shanghai, sunny, happy mood. The correspondence establishing module 302 establishes correspondences between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved in the form of a log in the saving module 303, and information can be added as needed, for example: safari park, travel. The user is offered the choice of whether to upload to the cloud; if the user does not upload at the time, the upload can also be performed later at any time as needed. After a local log is deleted, it can also be downloaded and restored from the cloud, or the corresponding cloud content can be deleted synchronously. With the above solution, text or emoji stickers can be generated automatically, accurately, and in real time for the captured video or image from the time, location, weather, mood, and other information to form a log, which facilitates management by the user.
In another embodiment, a computer storage medium is further provided, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to perform the information processing method described above.
The acquisition module 301, the correspondence establishing module 302, the saving module 303, the receiving module 304, the search module 305, and the invocation module 306 in the information processing apparatus proposed in the embodiments of the present disclosure may all be implemented by a processor, and may of course also be implemented by specific logic circuits. The processor may be located in an electronic device such as a server; in practical applications, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA), among others.
Obviously, those skilled in the art should understand that the modules or steps of the embodiments of the present disclosure described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage medium (ROM/RAM, magnetic disk, optical disc) and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that given here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. Therefore, the present disclosure is not limited to any specific combination of hardware and software.
The above content is a further detailed description of the embodiments of the present disclosure in combination with specific implementations, and the specific implementation of the present disclosure shall not be regarded as being limited to these descriptions. For those of ordinary skill in the art to which the present disclosure belongs, several simple deductions or substitutions may also be made without departing from the concept of the present disclosure, all of which shall be regarded as falling within the protection scope of the present disclosure.
Industrial Applicability
In the information processing method provided by the embodiments of the present disclosure, feature information of a target file is acquired; a correspondence between the feature information and the target file is established; and the target file is classified and stored based on the feature information and the correspondence. With the above solution, when pictures or videos are stored, they are classified and stored according to their feature information, which makes it convenient for users to quickly search and browse, improves the user experience, and better meets user needs.

Claims (11)

  1. An information processing method, the method comprising:
    acquiring feature information of a target file;
    establishing a correspondence between the feature information and the target file; and
    classifying and storing the target file based on the feature information and the correspondence.
  2. The information processing method according to claim 1, wherein the acquiring feature information of a target file comprises:
    performing feature recognition on an image in the target file to obtain a feature recognition result; and
    determining, according to a preset rule, feature information matching the feature recognition result.
  3. The information processing method according to claim 2, wherein the determining, according to a preset rule, feature information matching the feature recognition result comprises:
    searching a preset set of biometric information templates for the feature recognition result to obtain a biometric information template matching the feature recognition result; and
    finding, according to a preset correspondence between biometric information templates and feature information, the feature information corresponding to the biometric information template.
  4. The information processing method according to claim 2, wherein the feature recognition result comprises at least one of: mouth-corner information, lip information, upper-and-lower-lip height information, shooting time information, shooting location information, and shooting weather information.
  5. The information processing method according to claim 1, wherein the method further comprises:
    receiving invocation request information, the invocation request information carrying feature information of a target file;
    searching the correspondence for the target file corresponding to the feature information; and
    invoking the target file.
  6. An information processing apparatus, comprising:
    an acquisition module configured to acquire feature information of a target file;
    a correspondence establishing module configured to establish a correspondence between the feature information and the target file; and
    a saving module configured to classify and store the target file based on the feature information and the correspondence.
  7. The information processing apparatus according to claim 6, wherein the acquisition module comprises:
    a first extraction sub-module configured to perform feature recognition on an image in the target file to obtain a feature recognition result; and
    a determination sub-module configured to determine, according to a preset rule, feature information matching the feature recognition result.
  8. The information processing apparatus according to claim 7, wherein the determination sub-module is configured to search a preset set of biometric information templates for the feature recognition result to obtain a biometric information template matching the feature recognition result;
    and to find, according to a preset correspondence between biometric information templates and feature information, the feature information corresponding to the biometric information template.
  9. The information processing apparatus according to claim 7, wherein the feature recognition result comprises at least one of: mouth-corner information, lip information, upper-and-lower-lip height information, shooting time information, shooting location information, and shooting weather information.
  10. The information processing apparatus according to claim 6, wherein the apparatus further comprises:
    a receiving module configured to receive an invocation request, the invocation request carrying feature information of a target file;
    a search module configured to search the correspondence for the target file corresponding to the feature information; and
    an invocation module configured to invoke the target file.
  11. A computer storage medium, the computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the information processing method according to any one of claims 1 to 5.
PCT/CN2017/082912 2016-08-09 2017-05-03 一种信息处理方法、装置及存储介质 WO2018028253A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610647621.8A CN107704471A (zh) 2016-08-09 2016-08-09 Information processing method and device, and file invoking method and device
CN201610647621.8 2016-08-09

Publications (1)

Publication Number Publication Date
WO2018028253A1 true WO2018028253A1 (zh) 2018-02-15

Family

ID=61162627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/082912 WO2018028253A1 (zh) 2016-08-09 2017-05-03 一种信息处理方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN107704471A (zh)
WO (1) WO2018028253A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472098A * 2019-08-20 2019-11-19 北京达佳互联信息技术有限公司 Method and apparatus for determining a video content topic, electronic device, and storage medium
CN113660482A * 2021-07-28 2021-11-16 上海立可芯半导体科技有限公司 Automated testing method and apparatus for an AI camera device or module

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965588A * 2018-06-27 2018-12-07 维沃移动通信有限公司 Information prompting method and mobile terminal
CN111046814A * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111221782B * 2020-01-17 2024-04-09 惠州Tcl移动通信有限公司 File searching method and apparatus, storage medium, and mobile terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744086A * 2004-09-01 2006-03-08 松下电器产业株式会社 Image file processing method and related technology
CN101055592A * 2007-05-20 2007-10-17 宁尚国 Image information generating method and apparatus
CN103226575A * 2013-04-01 2013-07-31 北京小米科技有限责任公司 Image processing method and apparatus
CN105243084A * 2015-09-07 2016-01-13 广东欧珀移动通信有限公司 Method and system for saving captured image files, and method and system for retrieving the same
CN105787131A * 2016-03-31 2016-07-20 宇龙计算机通信科技(深圳)有限公司 Information processing method and apparatus, and mobile terminal


Also Published As

Publication number Publication date
CN107704471A (zh) 2018-02-16

Similar Documents

Publication Publication Date Title
WO2018028253A1 (zh) Information processing method, device and storage medium
US10885100B2 (en) Thumbnail-based image sharing method and terminal
US10013600B2 (en) Digital image processing method and apparatus, and storage medium
US9830727B2 (en) Personalizing image capture
US8810684B2 (en) Tagging images in a mobile communications device using a contacts list
CN104572732A (zh) 查询用户标识的方法及装置、获取用户标识的方法及装置
WO2016127478A1 (zh) 一种图像处理方法、装置和终端
CN105654039B (zh) 图像处理的方法和装置
WO2019153504A1 (zh) 一种群组创建的方法及其终端
WO2017054442A1 (zh) 一种图像信息识别处理方法及装置、计算机存储介质
TW201508520A (zh) 設置背景圖像的方法及相關的伺服器和系統
WO2019218459A1 (zh) 一种照片存储方法、存储介质、服务器和装置
WO2021115483A1 (zh) 一种图像处理方法及相关装置
US11917158B2 (en) Static video recognition
WO2021104097A1 (zh) 表情包生成方法、装置及终端设备
CN105979363A (zh) 一种身份识别法和装置
WO2017067485A1 (zh) 一种图片管理方法、装置及一种终端
US20170192965A1 (en) Method and apparatus for smart album generation
WO2017101323A1 (zh) 图像采集、信息推送方法、装置及手机
US20190082002A1 (en) Media file sharing method, media file sharing device, and terminal
WO2015196681A1 (zh) 一种图片处理方法及电子设备
CN111316628A (zh) 一种基于智能终端的图像拍摄方法及图像拍摄系统
KR101715708B1 (ko) 이미지 분석기반의 자동화된 관계형 태그 생성 시스템과 이를 이용한 서비스 제공방법
US20170200062A1 (en) Method of determination of stable zones within an image stream, and portable device for implementing the method
US20170171462A1 (en) Image Collection Method, Information Push Method and Electronic Device, and Mobile Phone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17838385

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17838385

Country of ref document: EP

Kind code of ref document: A1