WO2018028253A1 - Information processing method, apparatus and storage medium - Google Patents

Information processing method, apparatus and storage medium

Info

Publication number
WO2018028253A1
WO2018028253A1 · PCT/CN2017/082912 · CN2017082912W
Authority
WO
WIPO (PCT)
Prior art keywords
information
feature
target file
feature information
video
Prior art date
Application number
PCT/CN2017/082912
Other languages
French (fr)
Chinese (zh)
Inventor
Xiao Fei (肖非)
Original Assignee
ZTE Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation
Publication of WO2018028253A1 publication Critical patent/WO2018028253A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 Querying
    • G06F 16/432 Query formulation
    • G06F 16/434 Query formulation using image data, e.g. images, photos, pictures taken by a user
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Definitions

  • the present disclosure relates to the field of communications, and in particular, to an information processing method, apparatus, and storage medium.
  • the technical problem to be solved by the embodiments of the present disclosure is to provide an information processing method, device, and storage medium.
  • In the related art, when a picture or video is stored, it is inconvenient for the user to search and browse; the user experience is poor and cannot meet the needs of users.
  • an embodiment of the present disclosure provides an information processing method, including:
  • the acquiring the feature information of the target file includes: performing feature recognition on the image in the target file to obtain a feature recognition result; and determining feature information matching the feature recognition result according to a preset rule.
  • The determining of the feature information that matches the feature recognition result according to the preset rule includes: searching the preset biometric information template set for the feature recognition result, and obtaining a biometric information template that matches the feature recognition result;
  • finding the feature information corresponding to the biometric information template according to a correspondence between preset biometric information templates and feature information.
  • the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • In an exemplary embodiment, the method further includes: receiving a call request, where the call request carries feature information of the target file; searching for the target file corresponding to the feature information in the correspondence; and calling the found target file.
  • an information processing apparatus including:
  • a correspondence establishing module configured to establish a correspondence between the feature information and the target file
  • the saving module is configured to classify and store the target file based on the feature information and the correspondence.
  • the obtaining module includes:
  • a first extraction submodule configured to perform feature recognition on an image in the target file to obtain a feature recognition result
  • a determining submodule configured to determine, according to a preset rule, feature information that matches the feature recognition result.
  • the determining submodule is configured to search for the feature recognition result in a preset biometric information template set, and obtain a biometric information template that matches the feature recognition result;
  • the feature information corresponding to the biometric information template is found according to a correspondence between the preset biometric information template and the feature information.
  • the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • the device further includes:
  • a receiving module configured to receive a call request, where the call request carries feature information of the target file
  • a search module configured to search for a target file corresponding to the feature information in the correspondence relationship
  • an embodiment of the present disclosure further provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the information processing method.
  • An advantageous aspect of the present disclosure is as follows: in the information processing method, apparatus, and storage medium according to the embodiments of the present disclosure, the information processing method includes acquiring feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above scheme, when a picture or video is stored, it is classified and stored according to its feature information, which facilitates quick searching and browsing, improves the user experience, and better meets the needs of users.
  • FIG. 1 is a schematic flowchart of processing of an information processing method according to Embodiment 1 of the present disclosure
  • FIG. 2 is a schematic flowchart of processing of an information processing method according to Embodiment 2 of the present disclosure
  • FIG. 3 is a schematic structural diagram of an information processing apparatus according to Embodiment 3 of the present disclosure;
  • FIG. 4 is a schematic structural diagram of an information processing apparatus according to Embodiment 4 of the present disclosure.
  • the first embodiment provides an information processing method.
  • the processing flow of the information processing method includes the following steps:
  • Step S101: Acquire feature information of the target file.
  • the target file includes: a picture or a video
  • the acquiring the feature information of the target file includes: performing feature recognition on the image in the target file to obtain a feature recognition result; and determining feature information that matches the feature recognition result according to a preset rule;
  • the determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching for the feature recognition result in the preset biometric information template set, and obtaining a biometric information template that matches the feature recognition result. And searching for the feature information corresponding to the biometric information template according to the correspondence between the preset biometric information template and the feature information.
  • the feature recognition includes: biometric recognition and environmental feature recognition; the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information. .
  • the feature information includes at least image feature information
  • The image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying; the expression information can reflect the mood of the person, for example, laughing indicates a happy mood.
  • the feature information may further include environment feature information.
  • The acquiring of the feature information of the target file may further include: acquiring environment feature information of the current shooting environment when the picture or video is captured. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information; the shooting location information can be obtained through the Global Positioning System (GPS) function of the mobile terminal (such as a mobile phone), with the GPS coordinates resolved into the shooting location information through a network query.
  • the weather information can be obtained by querying the network according to the shooting time information and shooting location information at the time of shooting.
  • The parsing of the image feature information from the image of the picture or the video includes: performing face recognition on the picture, or on the image of a target frame in the video; extracting biometric information from the recognized face image; and generating the image feature information from the biometric information according to a preset rule.
  • the face recognition algorithm can be used to extract the biometric information of the face image successfully recognized in the face recognition.
  • The extracted mouth angle information, lip information, and upper and lower lip height information can be reflected by the geometric features of the mouth, including the left and right mouth corner feature points and the upper and lower lip feature points.
  • the correspondence between the preset biometric information template and the image feature information may be saved as a correspondence table.
  • the correspondence table may be as shown in Table 1.
  • The content of the biometric information template in Table 1 is simplified; an actual biometric information template is divided by the geometric feature information of the mouth in the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward angle of the mouth corners and the height difference between the upper and lower lip feature points can be calculated.
  • For different facial expression feature information, the corresponding mouth corner angle and height difference differ; when smiling or laughing, the geometric contour of the mouth corner is raised, and the corner angle and height difference are positive and within a preset threshold range. These facial expression features are stored in the database in advance according to expression category; that is, the biometric information templates are stored in the database, together with the correspondence table between the biometric information templates and the image feature information.
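The mouth-corner angle and lip height-difference computation described above can be sketched as follows; the coordinate convention, function name, and sample points are illustrative assumptions, not the patent's actual algorithm:

```python
import math

def mouth_geometry(left_corner, right_corner, upper_lip, lower_lip):
    """Compute the mouth-corner angle and lip height difference from
    feature-point coordinates (hypothetical (x, y) positions, with y
    increasing upward for simplicity)."""
    # Midpoint of the lip centre line, between upper- and lower-lip points.
    center_y = (upper_lip[1] + lower_lip[1]) / 2.0
    # Average upward angle of the two mouth corners relative to the centre:
    # positive when the corners are raised (smiling), negative when drooping.
    left_angle = math.degrees(math.atan2(left_corner[1] - center_y,
                                         abs(left_corner[0] - upper_lip[0])))
    right_angle = math.degrees(math.atan2(right_corner[1] - center_y,
                                          abs(right_corner[0] - upper_lip[0])))
    corner_angle = (left_angle + right_angle) / 2.0
    # Height difference between the upper- and lower-lip feature points.
    height_diff = upper_lip[1] - lower_lip[1]
    return corner_angle, height_diff
```

With raised corners the angle comes out positive, matching the smiling/laughing case described above.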
  • The geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-categorized in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the image feature information of the current image.
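The one-by-one template comparison can be sketched as below; the template set, the expression labels, and the inverse-distance similarity measure are hypothetical stand-ins for whatever metric an implementation would actually use:

```python
def match_expression(geometry, templates):
    """Compare extracted mouth geometry (corner_angle, height_diff) with
    pre-stored biometric templates one by one and return the image feature
    information (expression label) of the most similar template."""
    def similarity(a, b):
        # Simple inverse-distance similarity; a real system would use a
        # richer metric over all feature points.
        return 1.0 / (1.0 + abs(a[0] - b[0]) + abs(a[1] - b[1]))
    best_label, best_score = None, -1.0
    for label, template_geometry in templates.items():
        score = similarity(geometry, template_geometry)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

templates = {              # hypothetical template set
    "happy": (15.0, 8.0),  # corners raised
    "neutral": (0.0, 4.0),
    "sad": (-12.0, 3.0),   # corners drooping
}
```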
  • the method further includes: extracting the target frame from the compressed stored video file according to a preset manner.
  • the video is encoded and decoded by a compression algorithm such as H.264, and each frame of the compressed video does not necessarily have complete information, and usually is calculated according to the information of the preceding and succeeding frames.
  • the frames of the video are divided into three types: I, B, and P, and information such as chromaticity and brightness is saved.
  • the I frame includes complete image information
  • the B frame records the difference between the current frame and its neighboring (preceding and following) frames
  • the P frame records the difference between the current frame and the preceding frame.
  • Compression mainly reduces the size of the video because B frames and P frames store only these differences, which reduces the amount of information that must be stored and transmitted.
  • Parsing B and P frames requires reconstructing them from the preceding and following frames, which involves a large amount of computation; therefore, in this embodiment, only I frames are parsed. In compressed video, I frames containing the complete picture information are inserted at fixed intervals, which ensures that the B and P frame images can be restored without noise introduced by the preceding and following frames affecting the subsequently restored video. The maximum interval is typically 300 frames (the frame rate is usually greater than 30 frames per second).
  • The I frame interval is obtained, and then all the I frames are parsed; that is, the target frames are extracted.
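A minimal sketch of parsing only the I frames, assuming the decoder exposes each frame's type; real code would query a codec library for the frame type rather than using tuples:

```python
def extract_target_frames(frames):
    """Extract only the I frames (which carry complete picture information)
    from a decoded frame sequence, skipping the costly reconstruction of
    B and P frames. Each frame is modelled here as a (frame_type, data)
    tuple; the stream layout below is illustrative."""
    return [data for frame_type, data in frames if frame_type == "I"]

# A compressed stream inserts an I frame at a fixed interval; between two
# I frames lie B and P frames that store only differences.
stream = [("I", "img0"), ("B", "d1"), ("P", "d2"),
          ("I", "img3"), ("B", "d4"), ("P", "d5")]
```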
  • Step S102: Establish a correspondence between the feature information and the target file, and classify and store the target file based on the feature information and the correspondence.
  • the correspondence between the image feature information and the picture or video is established.
  • A correspondence between the shooting time information and/or shooting location information and/or shooting weather information and the picture or video may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence are saved.
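The classify-and-store step can be sketched as a mapping from feature information to files; the in-memory dict stands in for the terminal's database, and the key names are illustrative:

```python
def classify_and_store(store, feature_info, target_file):
    """Classify and store a target file (a picture or video path) under
    each piece of its feature information, keeping the correspondence
    for later lookup."""
    for key in feature_info:          # e.g. "happy", "Shanghai", "sunny"
        store.setdefault(key, []).append(target_file)
    return store
```

For example, storing `IMG_001.jpg` under the feature information `["happy", "Shanghai"]` lets it later be found via either attribute.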
  • The correspondence among the image feature information, the shooting time information, the shooting location information, the shooting weather information, and the picture may also be recorded by generating a log, which may be saved to the mobile terminal.
  • The user can choose to upload the log to the cloud, so that the cloud database stays consistent with the mobile terminal and serves as a backup of the mobile terminal's local data.
  • By default, the feature information of the log is saved in a linked-list data structure.
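A minimal linked-list log of the kind described, with illustrative field names:

```python
class LogNode:
    """One log entry in the singly linked list used to save the default
    feature information (the dict fields are illustrative)."""
    def __init__(self, feature_info):
        self.feature_info = feature_info  # e.g. {"time": ..., "mood": ...}
        self.next = None

class FeatureLog:
    """Append-only linked list of log entries."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, feature_info):
        node = LogNode(feature_info)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def entries(self):
        """Walk the list and return entries in insertion order."""
        node, out = self.head, []
        while node is not None:
            out.append(node.feature_info)
            node = node.next
        return out
```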
  • For a captured video or picture, the time, place, weather, mood, and other information can be automatically turned into text or emoticons and formed into a log, which is convenient for the user to manage.
  • The information processing method includes acquiring feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or the video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above scheme, when a picture or video is stored, it is classified and stored according to its feature information, which makes it easy for users to quickly find and browse, improves the user experience, and better meets user needs.
  • the second embodiment provides an information processing method.
  • the processing flow of the information processing method is as shown in FIG. 2, and includes the following steps:
  • Step S201: Acquire feature information of the target file.
  • the target file includes: a picture or a video
  • the acquiring the feature information of the target file includes: performing feature recognition on the image in the target file to obtain a feature recognition result; and determining feature information that matches the feature recognition result according to a preset rule;
  • the determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching for the feature recognition result in the preset biometric information template set, and obtaining a biometric information template that matches the feature recognition result. And searching for the feature information corresponding to the biometric information template according to the correspondence between the preset biometric information template and the feature information.
  • the feature recognition includes: biometric recognition and environmental feature recognition.
  • the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • The feature information includes at least image feature information; the image feature information may include expression information of a person in the picture or video image, which can reflect the mood of the person; for example, a smile indicates a happy mood.
  • the feature information may further include environment feature information.
  • The acquiring of the feature information of the target file may further include: acquiring environment feature information of the current shooting environment when the picture or video is captured. The environment feature information includes at least one of shooting time information, shooting location information, and shooting weather information; the shooting location information can be obtained through the Global Positioning System (GPS) function of the mobile terminal (such as a mobile phone), with the GPS coordinates resolved into the shooting location information through a network query.
  • the weather information can be obtained by querying the network according to the shooting time information and shooting location information at the time of shooting.
  • The parsing of the image feature information from the image of the picture or the video includes: performing face recognition on the picture, or on the image of a target frame in the video; extracting biometric information from the recognized face image; and generating the image feature information from the biometric information according to a preset rule.
  • the face recognition algorithm can be used to extract the biometric information of the face image successfully recognized in the face recognition.
  • The extracted mouth angle information, lip information, and upper and lower lip height information can be reflected by the geometric features of the mouth, including the left and right mouth corner feature points and the upper and lower lip feature points.
  • the correspondence between the preset biometric information template and the image feature information may be saved as a correspondence table.
  • the correspondence table may be as shown in Table 1.
  • The content of the biometric information template in Table 1 is simplified; an actual biometric information template adopts the facial expression feature information, divided by the geometric feature information of the mouth into the left and right mouth corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward angle of the mouth corners and the height difference between the upper and lower lip feature points can be calculated.
  • For different facial expression feature information, the corresponding mouth corner angle and height difference differ; when smiling or laughing, the geometric contour of the mouth corner is raised, and the corner angle and height difference are positive and within a preset threshold range. These facial expression features are stored in the database in advance according to expression category; that is, the biometric information templates are stored in the database, together with the correspondence table between the biometric information templates and the image feature information.
  • The geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-categorized in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the image feature information of the current image.
  • the method further includes: extracting the target frame from the compressed stored video file according to a preset manner.
  • the video is encoded and decoded by a compression algorithm such as H.264, and each frame of the compressed video does not necessarily have complete information, and usually is calculated according to the information of the preceding and succeeding frames.
  • the frames of the video are divided into three types: I, B, and P, and information such as chromaticity and brightness is saved.
  • the I frame includes complete image information
  • the B frame records the difference between the current frame and its neighboring (preceding and following) frames
  • the P frame records the difference between the current frame and the preceding frame.
  • Compression mainly reduces the size of the video because B frames and P frames store only these differences, which reduces the amount of information that must be stored and transmitted.
  • In compressed video, I frames containing the complete picture information are inserted at fixed intervals; the maximum interval is typically 300 frames (the frame rate is usually greater than 30 frames per second).
  • Step S202: Establish a correspondence between the feature information and the target file, and classify and store the target file based on the feature information and the correspondence.
  • the correspondence between the image feature information and the picture or video is established.
  • A correspondence between the shooting time information and/or shooting location information and/or shooting weather information and the picture or video may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence are saved.
  • The correspondence among the image feature information, the shooting time information, the shooting location information, the shooting weather information, and the picture may also be recorded by generating a log, which may be saved to the mobile terminal.
  • The user can choose to upload the log to the cloud, so that the cloud database stays consistent with the mobile terminal and serves as a backup of the mobile terminal's local data.
  • By default, the feature information of the log is saved in a linked-list data structure.
  • For a captured video or picture, the time, place, weather, mood, and other information can be automatically turned into text or emoticons and formed into a log, which is convenient for the user to manage.
  • Step S203: Receive a call request, where the call request carries the feature information of the target file.
  • the object file includes a picture or a video
  • the feature information includes at least image feature information.
  • The feature information of the target file includes at least the image feature information parsed from the image of the picture or the video; the image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying, which can reflect the mood of the person; for example, laughing indicates a happy mood.
  • the feature information may further include at least one of shooting time information, shooting location information, and shooting weather information when the picture or video is captured.
  • Step S204: Find the target file corresponding to the feature information in the correspondence.
  • The correspondence may be recorded in the form of a log, and the log may be saved.
  • By default, the feature information is saved in a linked-list data structure. Subsequently, the log can be retrieved and queried according to its feature information, and the user can also add feature information as needed and edit or modify it.
  • Users can query logs, photos, or videos by attributes such as time, location, and mood; for example, querying by the time period 2016.6.1-2016.6.30, travel, a happy mood, and subsequently user-added keywords returns the corresponding log, photo, or video. Because logs, photos, and videos also support editing, modification, query, and retrieval, the scheme makes it convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
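The attribute query above (time period, mood, user-added keywords) can be sketched as below; the entry layout, tag names, and sample data are assumptions for illustration:

```python
from datetime import date

def query_logs(entries, start, end, keywords):
    """Filter log entries by a date range and a set of keywords
    (mood, activity, user-added tags); return the matching files."""
    results = []
    for entry in entries:
        if not (start <= entry["date"] <= end):
            continue                       # outside the queried time period
        if keywords.issubset(entry["tags"]):
            results.append(entry["file"])
    return results

logs = [  # hypothetical log entries
    {"date": date(2016, 6, 5), "tags": {"travel", "happy"}, "file": "v1.mp4"},
    {"date": date(2016, 7, 2), "tags": {"travel", "happy"}, "file": "v2.mp4"},
    {"date": date(2016, 6, 20), "tags": {"work"}, "file": "p1.jpg"},
]
```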
  • Step S205: Call the found target file.
  • The found target file may be displayed, sent to a friend or family member, or uploaded to Qzone, Weibo, or WeChat Moments.
  • The file invoking method includes: receiving a call request, where the call request carries the feature information of the target file; finding, according to the feature information and the correspondence saved in the information processing method of the first embodiment, the target file corresponding to the feature information; and calling the found target file. With the above scheme, when a picture or video is stored, it is stored according to its feature information, which facilitates quick searching and browsing, improves the user experience, and better meets the needs of users.
  • the embodiment provides an information processing device.
  • As shown in FIG. 3, the information processing device includes:
  • the obtaining module 301 is configured to acquire feature information of the target file.
  • the target file includes a picture or a video.
  • the correspondence establishing module 302 is configured to establish a correspondence between the feature information and the target file.
  • the saving module 303 is configured to store the target file based on the feature information and the correspondence.
  • The feature information includes at least image feature information; the image feature information may include expression information of a person in the picture or video image, such as laughing, angry, or crying, which can reflect the mood of the person; for example, laughing indicates a happy mood.
  • the feature information may also include environmental feature information.
  • the obtaining module 301 is further configured to acquire environmental feature information of the current shooting environment when taking a picture or video.
  • the environmental feature information includes at least one of shooting time information, shooting location information, and shooting weather information.
  • the shooting location information can be obtained by the GPS function provided by the mobile terminal (such as a mobile phone), and the GPS information is obtained through the network to obtain the shooting location information.
  • the weather information can be obtained by querying the network according to the shooting time information and shooting location information at the time of shooting.
  • the obtaining module 301 includes: an obtaining sub-module 3011 and a determining sub-module 3012.
  • the obtaining sub-module 3011 is configured to perform feature recognition on an image in the target file to obtain a feature recognition result;
  • The determining sub-module 3012 is configured to determine, according to a preset rule, feature information that matches the feature recognition result, where the feature recognition includes biometric recognition and environmental feature recognition.
  • the determining sub-module 3012 is configured to search for the feature recognition result in the preset biometric information template set, and obtain a biometric information template that matches the feature recognition result; according to the preset biometric information template and feature Corresponding relationships between the information, and finding feature information corresponding to the biometric information template.
  • the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • the face recognition algorithm can be used to extract the biometric information of the face image successfully recognized in the face recognition.
  • The extracted mouth angle information, lip information, and upper and lower lip height information can be reflected by the geometric features of the mouth, including the left and right mouth corner feature points and the upper and lower lip feature points.
  • the correspondence between the preset biometric information template and the image feature information may be saved as a correspondence table.
  • the correspondence table may be as shown in Table 1.
  • The content of the biometric information template in Table 1 is simplified; an actual biometric information template is divided by the geometric feature information of the mouth in the facial expression feature information, including the left and right mouth corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward angle of the mouth corners and the height difference between the upper and lower lip feature points can be calculated.
  • For different facial expression feature information, the corresponding mouth corner angle and height difference differ; when smiling or laughing, the geometric contour of the mouth corner is raised, and the corner angle and height difference are positive and within a preset threshold range. These facial expression features are stored in the database in advance according to expression category.
  • the biometric information template is stored in the database, and the correspondence table between the biometric information template and the image feature information is stored in the database.
  • The geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-categorized in the database to obtain similarity values, and the image feature information corresponding to the highest similarity value is taken as the image feature information of the current image.
  • the method further includes a second extraction module 304, configured to extract a target frame from the compressed stored video file according to a preset manner.
  • the video is encoded and decoded by a compression algorithm such as H.264, and each frame of the compressed video does not necessarily have complete information, and usually is calculated according to the information of the preceding and succeeding frames.
  • the frames of the video are divided into three types: I, B, and P, and information such as chromaticity and brightness is saved.
  • the I frame includes complete image information
  • the B frame stores the difference between the current frame and the preceding and succeeding frames
  • the P frame stores the difference between the current frame and the preceding frame.
  • compressed video reduces its size mainly by storing only these differences in the B and P frames, thereby reducing the amount of information that has to be stored and transmitted.
  • I frames containing complete picture information are inserted into the compressed video at fixed intervals.
  • the maximum interval is 300 frames (the frame rate is usually greater than 30 frames/second).
  • after the video header information is parsed, the I-frame interval is obtained, and then all the I frames are parsed, i.e., the target frames are extracted.
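Under the assumption of a fixed I-frame (GOP) interval read from the stream header, the target-frame positions can be computed directly:

```python
def i_frame_indices(total_frames, gop_interval):
    """Indices of the I frames in a stream whose header reports a fixed
    I-frame (GOP) interval. Frame 0 is assumed to be an I frame."""
    if gop_interval <= 0:
        raise ValueError("GOP interval must be positive")
    return list(range(0, total_frames, gop_interval))
```

Decoding only these indices avoids the heavier computation of reconstructing B and P frames from their neighbors; the fixed-interval assumption is a simplification, since real encoders may also insert I frames at scene cuts.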
  • after the obtaining module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes a correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
  • the correspondence establishing module 302 can also establish a correspondence between the shooting time information and/or shooting location information and/or shooting weather information and the picture or video, and the saving module 303 saves the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence.
  • the log may be saved in the saving module 303; the user may upload the log to the cloud, with the cloud database kept consistent with the saving module 303.
  • the default feature information of the log is saved using a linked-list data structure.
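A linked-list log layout of the kind mentioned above might look like this (the field names are illustrative, not specified by the patent):

```python
class LogNode:
    """One log entry holding the default feature information of a picture/video."""
    def __init__(self, time, place, weather, mood, extra=None):
        self.features = {"time": time, "place": place, "weather": weather,
                         "mood": mood, "extra": extra or []}
        self.next = None


class FeatureLog:
    """Singly linked list of log entries, appended in capture order."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, node):
        # O(1) append at the tail keeps entries in capture order.
        if self.head is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def entries(self):
        # Walk the list and return the feature dicts in order.
        node, out = self.head, []
        while node is not None:
            out.append(node.features)
            node = node.next
        return out
```

A linked list makes appends and in-place deletions cheap, which suits a log that grows with every capture.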
  • the acquisition module 301 acquires the time, location, weather, and mood information of the person in the picture when the picture is taken. Such as: July 10, Shanghai, sunny, happy.
  • the correspondence relationship establishing module 302 associates the captured video or picture with the corresponding time, location, weather, mood, and the like, which may be saved in the saving module 303 in the form of a log; information may also be added as needed, such as: zoo, travel, etc. The user can choose whether to upload to the cloud; if the user did not upload at the time, the log can also be uploaded later at any time as needed. After the local log is deleted, it can be downloaded and restored from the cloud, or the corresponding content in the cloud can be deleted.
  • text or emoticons can be automatically generated from the time, place, weather, mood, and other information of the captured video or picture to form a log, which is convenient for the user to manage.
  • An information processing apparatus includes: the acquisition module 301, which acquires feature information of a target file, the target file including a picture or a video, where acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or video; the correspondence relationship establishing module 302, which establishes a correspondence between the feature information and the target file; and the saving module 303, which saves the feature information, the target file, and the correspondence.
  • with the above scheme, a picture or video is classified and stored according to its feature information, which facilitates quick searching and browsing by the user, improves the user experience, and better meets the user's needs.
  • the embodiment provides an information processing device.
  • the composition of the information processing device is as shown in FIG. 4, and includes:
  • the obtaining module 301 is configured to acquire feature information of the target file.
  • the target file includes a picture or a video.
  • the correspondence establishing module 302 is configured to establish a correspondence between the feature information and the target file.
  • the saving module 303 is configured to store the target file based on the feature information and the correspondence.
  • the feature information includes at least image feature information; the image feature information may include expression information of a person in a picture or video image, such as laughing, angry, or crying. The expression information can reflect the mood of the person; for example, laughing indicates a happy mood.
  • the feature information may also include environmental feature information.
  • the obtaining module 301 is further configured to acquire environmental feature information of the current shooting environment when taking a picture or video.
  • the environmental feature information includes at least one of shooting time information, shooting location information, and shooting weather information.
  • the shooting location information can be obtained through the GPS function provided by the mobile terminal (such as a mobile phone), with the GPS coordinates resolved into the shooting location information through a network query.
  • the weather information can be obtained by querying the network according to the shooting time information and shooting location information at the time of shooting.
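As a rough illustration of assembling the environment feature information at capture time, with an in-memory dict standing in for the network weather service (a real implementation would query an online service with the shooting time and the GPS-derived location):

```python
def query_weather(time_info, location_info, weather_db):
    """Stand-in for the network weather query described in the text."""
    return weather_db.get((time_info, location_info), "unknown")


def environment_features(time_info, location_info, weather_db):
    """Bundle shooting time, location, and weather into one record."""
    return {
        "shooting_time": time_info,
        "shooting_location": location_info,
        "shooting_weather": query_weather(time_info, location_info, weather_db),
    }
```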
  • the receiving module 304 is configured to receive a call request, and the feature information of the target file is carried in the call request.
  • the searching module 305 is configured to search for the target file corresponding to the feature information in the correspondence relationship.
  • the calling module 306 is configured to make a call to the target file.
  • the searching module 305 can query logs, photos, or videos by attributes such as time, location, and mood; for example, the corresponding log, photo, or video can be queried by the time period 2016.6.1-2016.6.30, by travel, by a happy mood, or by keywords subsequently added by the user. Because logs, photos, and videos also support editing, modification, query, and retrieval, it is convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
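The attribute-based query just described can be sketched as a filter over log entries (the entry layout and field names are assumptions for illustration):

```python
def query_logs(logs, start=None, end=None, keywords=()):
    """Return log entries whose time falls in [start, end] and whose
    attribute values contain every requested keyword."""
    results = []
    for entry in logs:
        t = entry["time"]
        if (start is not None and t < start) or (end is not None and t > end):
            continue
        # Collect the searchable attribute values, including user-added keywords.
        haystack = {entry["place"], entry["weather"], entry["mood"],
                    *entry.get("extra", [])}
        if all(k in haystack for k in keywords):
            results.append(entry)
    return results
```

ISO-style date strings are used so that lexicographic comparison matches chronological order.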
  • the calling module 306 is configured to invoke the target file found by the searching module 305;
  • the found target file may be displayed, sent to a friend or family member, uploaded to a QQ space, a microblog, or a WeChat circle of friends.
  • the obtaining module 301 includes: an obtaining sub-module 3011 and a determining sub-module 3012.
  • the obtaining sub-module 3011 is configured to perform feature recognition on an image in the target file to obtain a feature recognition result;
  • the determining sub-module 3012 is configured to determine, according to a preset rule, feature information that matches the feature recognition result; the feature recognition includes: biometric recognition and environmental feature recognition.
  • the determining sub-module 3012 is configured to search for the feature recognition result in the preset biometric information template set to obtain a biometric information template that matches the feature recognition result, and to find, according to the correspondence between preset biometric information templates and feature information, the feature information corresponding to the biometric information template.
  • the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
  • a face recognition algorithm can be used to extract biometric information from the face images successfully recognized during face recognition.
  • the extracted mouth-corner information, lip information, and upper and lower lip height information can be reflected by a geometric feature map of the mouth, including the left and right mouth-corner feature points, the upper and lower lip feature points, and data such as the center point of the geometric feature map.
  • the correspondence between the preset biometric information template and the image feature information may be saved as a correspondence table.
  • the correspondence table may be as shown in Table 1.
  • the content of the biometric information template in Table 1 is simplified; the actual biometric information template is divided according to the geometric feature information of the mouth in the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward curvature of the mouth corners and the height difference between the upper and lower lip feature points can be calculated.
  • for different facial expression feature information, the corresponding mouth-corner curvature and height difference are different; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth-corner curvature and height difference are both positive and fall within a preset threshold range; this facial expression feature information is classified by expression and stored in the database in advance, that is, the biometric information templates are stored in the database, and the correspondence table between the biometric information templates and the image feature information is stored in the database.
  • the geometric feature information extracted by face recognition is compared one by one with the biometric information templates pre-classified in the database to obtain different similarity values, and the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
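The stored correspondence table can be modeled as a simple mapping from template identifier to image feature information; the identifiers below are made up, since Table 1 is simplified in the text:

```python
# Hypothetical correspondence table (cf. Table 1): biometric template id
# -> image feature information (expression label).
CORRESPONDENCE = {
    "mouth_corners_up_large": "laugh",
    "mouth_corners_up_small": "smile",
    "mouth_corners_down": "angry",
}


def feature_info_for(template_id, table=CORRESPONDENCE):
    """Look up the image feature information for a matched template;
    returns None when the template has no entry in the table."""
    return table.get(template_id)
```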
  • the apparatus further includes a second extraction module 304, configured to extract a target frame from the compressed stored video file in a preset manner.
  • to save storage space, captured video is usually encoded and decoded with a compression algorithm such as H.264, and each frame of the compressed video does not necessarily carry complete information; a picture often has to be reconstructed by calculation from the preceding and succeeding frames.
  • the frames of the video are divided into three types: I, B, and P, and information such as chromaticity and brightness is saved.
  • the I frame includes complete image information
  • the B frame stores the difference between the current frame and the preceding and succeeding frames
  • the P frame stores the difference between the current frame and the preceding frame.
  • compressed video reduces its size mainly by storing only these differences in the B and P frames, thereby reducing the amount of information that has to be stored and transmitted.
  • I frames containing complete picture information are inserted into the compressed video at fixed intervals.
  • the maximum interval is 300 frames (the frame rate is usually greater than 30 frames/second).
  • after the video header information is parsed, the I-frame interval is obtained, and then all the I frames are parsed, i.e., the target frames are extracted.
  • after the obtaining module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes a correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
  • the correspondence establishing module 302 can also establish a correspondence between the shooting time information and/or shooting location information and/or shooting weather information and the picture or video, and the saving module 303 saves the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondence.
  • the log may be saved in the saving module 303; the user may upload the log to the cloud, with the cloud database kept consistent with the saving module 303.
  • the default feature information of the log is saved using a linked-list data structure.
  • the acquisition module 301 acquires the time, location, weather, and mood information of the person in the picture when the picture is taken. Such as: July 10, Shanghai, sunny, happy.
  • the correspondence relationship establishing module 302 associates the captured video or picture with the corresponding time, location, weather, mood, and the like, which may be saved in the saving module 303 in the form of a log; information may also be added as needed, such as: safari, travel, etc. The user can choose whether to upload to the cloud; if the user did not upload at the time, the log can also be uploaded later at any time as needed. After the local log is deleted, it can be downloaded and restored from the cloud, or the corresponding content in the cloud can be deleted.
  • text or emoticons can be automatically generated from the time, place, weather, mood, and other information of the captured video or picture to form a log, which is convenient for the user to manage.
  • a computer storage medium is stored, the computer storage medium storing computer executable instructions for executing the information processing method described above.
  • the obtaining module 301, the correspondence establishing module 302, the saving module 303, the receiving module 304, the searching module 305, and the calling module 306 in the information processing apparatus proposed in the embodiments of the present disclosure may all be implemented by a processor, or by a dedicated logic circuit; the processor may be located on an electronic device such as a server, and in practical applications the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
  • the modules or steps of the above embodiments of the present disclosure may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across multiple computing devices.
  • alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage medium (ROM/RAM, magnetic disk, optical disk) and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that herein, or they may be separately fabricated into individual integrated circuit modules, or several of the modules or steps may be implemented as a single integrated circuit module. Therefore, the present disclosure is not limited to any specific combination of hardware and software.
  • the information processing method provided by the embodiment of the present disclosure acquires feature information of the target file, establishes a correspondence between the feature information and the target file, and classifies and stores the target file based on the feature information and the correspondence relationship;
  • the above solution is used to store pictures or videos according to the feature information of the pictures or videos, so that the users can quickly find and browse, which improves the user experience and better meets the user's needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An information processing method, apparatus and storage medium, the information processing method comprising: acquiring feature information of a target file; establishing a correspondence between the feature information and the target file; and classifying and storing the target file on the basis of the feature information and the correspondence. Using said solution, a picture or a video is stored according to feature information of the picture or video, which aids a user in quickly finding and browsing, improves user experience, and better fulfills user needs.

Description

一种信息处理方法、装置及存储介质Information processing method, device and storage medium
相关申请的交叉引用Cross-reference to related applications
本申请基于申请号为201610647621.8.X、申请日为2016年08月10日的中国专利申请提出，并要求该中国专利申请的优先权，该中国专利申请的全部内容在此引入本申请作为参考。This application is based on, and claims priority to, the Chinese patent application with application number 201610647621.8.X filed on August 10, 2016, the entire contents of which are incorporated herein by reference.
技术领域Technical field
本公开涉及通信领域,尤其涉及一种信息处理方法、装置及存储介质。The present disclosure relates to the field of communications, and in particular, to an information processing method, apparatus, and storage medium.
背景技术Background technique
随着智能手机的普及,对于智能手机照相、拍摄视频等功能的使用越来越多,甚至已经逐渐取代了数码相机。由于智能手机的便携性,越来越多人习惯用智能手机记录生活、工作中的点点滴滴。但是现有的在对拍摄的图片或视频进行存储时,都是按照拍摄的时间顺序进行存储的,不方便用户进行查找和浏览,用户体验较差,不能满足用户需求。With the popularity of smart phones, the use of smart phone photography, video capture and other functions is increasing, and even has gradually replaced digital cameras. Due to the portability of smartphones, more and more people are accustomed to using smartphones to record bits and pieces of life and work. However, when the existing pictures or videos are stored, they are stored in the order of shooting time, which is inconvenient for the user to search and browse, and the user experience is poor, which cannot meet the user's needs.
发明内容Summary of the invention
本公开实施例主要解决的技术问题是，提供一种信息处理方法、装置及存储介质，解决现有技术中，对图片或视频进行存储时，不方便用户进行查找和浏览，用户体验较差，不能满足用户需求的问题。The technical problem mainly solved by the embodiments of the present disclosure is to provide an information processing method, apparatus, and storage medium, which address the problem in the prior art that, when pictures or videos are stored, it is inconvenient for the user to search and browse, the user experience is poor, and the user's needs cannot be met.
为解决上述技术问题,本公开实施例提供一种信息处理方法,包括:To solve the above technical problem, an embodiment of the present disclosure provides an information processing method, including:
获取目标文件的特征信息;Obtaining feature information of the target file;
建立所述特征信息与所述目标文件的对应关系;Establishing a correspondence between the feature information and the target file;
基于所述特征信息及所述对应关系对所述目标文件进行分类存储。 And classifying and storing the target file based on the feature information and the correspondence.
上述方案中,所述获取目标文件的特征信息,包括:对所述目标文件中的图像进行特征识别,得到特征识别结果;按照预设规则确定与所述特征识别结果匹配的特征信息。In the above solution, the acquiring the feature information of the target file includes: performing feature recognition on the image in the target file to obtain a feature recognition result; and determining feature information matching the feature recognition result according to a preset rule.
上述方案中,所述按照预设规则确定与所述特征识别结果匹配的特征信息,包括:在预设的生物特征信息模板集合中查找所述特征识别结果,得到与所述特征识别结果匹配的生物特征信息模板;根据预设的生物特征信息模板与特征信息之间的对应关系,查找出与所述生物特征信息模板对应的特征信息。In the above solution, the determining the feature information that matches the feature recognition result according to the preset rule includes: searching for the feature recognition result in the preset biometric information template set, and obtaining the matching with the feature recognition result. The biometric information template is configured to find feature information corresponding to the biometric information template according to a correspondence between the preset biometric information template and the feature information.
上述方案中,所述特征识别结果包括:嘴角信息、嘴唇信息、上下嘴唇高度信息、拍摄时间信息、拍摄地点信息、拍摄天气信息中的至少一种。In the above solution, the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
上述方案中,所述方法还包括:In the above solution, the method further includes:
接收调用请求信息,所述调用请求信息中携带目标文件的特征信息;Receiving call request information, where the call request information carries feature information of the target file;
在所述对应关系中查找与所述特征信息对应的目标文件;Finding an object file corresponding to the feature information in the correspondence relationship;
调用所述目标文件。Call the target file.
为解决上述技术问题,本公开实施例提供一种信息处理装置,包括:In order to solve the above technical problem, an embodiment of the present disclosure provides an information processing apparatus, including:
获取模块,配置为获取目标文件的特征信息;Obtaining a module, configured to obtain feature information of the target file;
对应关系建立模块,配置为建立所述特征信息与所述目标文件的对应关系;a correspondence establishing module, configured to establish a correspondence between the feature information and the target file;
保存模块,配置为基于所述特征信息及所述对应关系对所述目标文件进行分类存储。The saving module is configured to classify and store the target file based on the feature information and the correspondence.
上述方案中,所述获取模块包括:In the above solution, the obtaining module includes:
第一提取子模块,配置为对所述目标文件中的图像进行特征识别,得到特征识别结果;a first extraction submodule configured to perform feature recognition on an image in the target file to obtain a feature recognition result;
确定子模块，配置为按照预设规则确定与所述特征识别结果匹配的特征信息。a determining sub-module, configured to determine, according to a preset rule, feature information that matches the feature recognition result.
上述方案中,所述确定子模块,配置为在预设的生物特征信息模板集合中查找所述特征识别结果,得到与所述特征识别结果匹配的生物特征信息模板;In the above solution, the determining submodule is configured to search for the feature recognition result in a preset biometric information template set, and obtain a biometric information template that matches the feature recognition result;
根据预设的生物特征信息模板与特征信息之间的对应关系,查找出与所述生物特征信息模板对应的特征信息。The feature information corresponding to the biometric information template is found according to a correspondence between the preset biometric information template and the feature information.
上述方案中,所述特征识别结果包括:嘴角信息、嘴唇信息、上下嘴唇高度信息、拍摄时间信息、拍摄地点信息、拍摄天气信息中的至少一种。In the above solution, the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
上述方案中,所述装置还包括:In the above solution, the device further includes:
接收模块,配置为接收调用请求,所述调用请求中携带目标文件的特征信息;a receiving module, configured to receive a call request, where the call request carries feature information of the target file;
查找模块,配置为在所述对应关系中查找所述特征信息对应的目标文件;a search module, configured to search for a target file corresponding to the feature information in the correspondence relationship;
调用模块,配置为对所述目标文件进行调用。Calling a module configured to make a call to the target file.
为解决上述技术问题,本公开实施例还提供一种计算机存储介质,所述计算机存储介质中存储有计算机可执行指令,计算机可执行指令用于执行上述的信息处理方法。In order to solve the above technical problem, an embodiment of the present disclosure further provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the information processing method.
本公开的有益效果是：根据本公开实施例提供的一种信息处理方法、装置及存储介质，该信息处理方法包括获取目标文件的特征信息，所述目标文件包括图片或视频；相应的，获取目标文件的特征信息至少包括从图片或视频的图像中解析出图像特征信息；建立所述特征信息与所述目标文件的对应关系，并将所述特征信息、所述目标文件及二者的对应关系保存；采用上述方案，在对图片或视频进行存储时，按照图片或视频的特征信息对图片或视频进行分类存储，方便了用户进行快速查找和浏览，提升了用户体验，更好的满足了用户需求。An advantageous effect of the present disclosure is as follows: in the information processing method, apparatus, and storage medium provided by the embodiments of the present disclosure, the information processing method includes acquiring feature information of a target file, the target file including a picture or a video; correspondingly, acquiring the feature information of the target file includes at least parsing image feature information from the image of the picture or video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above scheme, when a picture or video is stored, it is classified and stored according to its feature information, which facilitates quick searching and browsing by the user, improves the user experience, and better meets the user's needs.
附图说明DRAWINGS
图1为本公开实施例一提供的一种信息处理方法的处理流程示意图;1 is a schematic flowchart of processing of an information processing method according to Embodiment 1 of the present disclosure;
图2为本公开实施例二提供的一种信息处理方法的处理流程示意图;2 is a schematic flowchart of processing of an information processing method according to Embodiment 2 of the present disclosure;
图3为本公开实施例三提供的一种信息处理装置的组成结构示意图;3 is a schematic structural diagram of a structure of an information processing apparatus according to Embodiment 3 of the present disclosure;
图4为本公开实施例四提供的一种信息处理装置的组成结构示意图。FIG. 4 is a schematic structural diagram of an information processing apparatus according to Embodiment 4 of the present disclosure.
具体实施方式detailed description
下面通过具体实施方式结合附图对本公开实施例作进一步详细说明。The embodiments of the present disclosure will be further described in detail below with reference to the accompanying drawings.
实施例一Embodiment 1
本实施例一提供一种信息处理方法,所述信息处理方法的处理流程,如图1所示,包括以下步骤:The first embodiment provides an information processing method. The processing flow of the information processing method, as shown in FIG. 1 , includes the following steps:
步骤S101:获取目标文件的特征信息;Step S101: acquiring feature information of the target file;
这里,所述目标文件包括:图片或视频;Here, the target file includes: a picture or a video;
相应的,所述获取目标文件的特征信息至少包括:对所述目标文件中的图像进行特征识别,得到特征识别结果;按照预设规则确定与所述特征识别结果匹配的特征信息;Correspondingly, the acquiring the feature information of the target file includes: performing feature recognition on the image in the target file to obtain a feature recognition result; and determining feature information that matches the feature recognition result according to a preset rule;
所述按照预设规则确定与所述特征识别结果匹配的特征信息,包括:在预设的生物特征信息模板集合中查找所述特征识别结果,得到与所述特征识别结果匹配的生物特征信息模板;根据预设的生物特征信息模板与特征信息之间的对应关系,查找出与所述生物特征信息模板对应的特征信息。The determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching for the feature recognition result in the preset biometric information template set, and obtaining a biometric information template that matches the feature recognition result. And searching for the feature information corresponding to the biometric information template according to the correspondence between the preset biometric information template and the feature information.
其中,所述特征识别包括:生物特征识别和环境特征识别;所述特征识别结果包括:嘴角信息、嘴唇信息、上下嘴唇高度信息、拍摄时间信息、拍摄地点信息、拍摄天气信息中的至少一种。 The feature recognition includes: biometric recognition and environmental feature recognition; the feature recognition result includes at least one of mouth angle information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information. .
所述特征信息至少包括图像特征信息，所述图像特征信息可以包括图片或视频图像中人物的表情信息，例如笑、生气、哭等，从表情信息可以反映出人物的心情，例如笑表示心情愉快。The feature information includes at least image feature information; the image feature information may include expression information of a person in a picture or video image, such as laughing, angry, or crying. The expression information can reflect the mood of the person; for example, laughing indicates a happy mood.
所述特征信息还可以包括环境特征信息;相应的,所述获取目标文件的特征信息还可以包括:在拍摄图片或视频时,获取当前拍摄环境的环境特征信息;环境特征信息包括:拍摄时间信息、拍摄地点信息、拍摄天气信息中的至少一种;拍摄地点信息可以通过移动终端(如手机)自带的全球定位系统(Global Positioning System,GPS)功能得到,GPS信息通过网络查询得到拍摄地点信息。拍摄天气信息可以根据拍摄时的拍摄时间信息、拍摄地点信息,通过网络查询得到。The feature information may further include environment feature information. Correspondingly, the acquiring the feature information of the target file may further include: acquiring environment feature information of the current shooting environment when the picture or video is captured; and the environment feature information includes: shooting time information At least one of the shooting location information and the shooting weather information; the shooting location information can be obtained by a Global Positioning System (GPS) function of the mobile terminal (such as a mobile phone), and the GPS information is obtained through the network to obtain the shooting location information. . The weather information can be obtained by querying the network according to the shooting time information and shooting location information at the time of shooting.
其中,从图片或视频的图像中解析出图像特征信息包括:对图片中的图像或视频中目标帧的图像进行人脸识别,并对识别出的人脸图像进行生物特征信息提取;根据提取出的生物特征信息按照预设规则生成图像特征信息。The parsing the image feature information from the image of the image or the video comprises: performing face recognition on the image in the image or the image of the target frame in the video, and performing biometric information extraction on the recognized face image; The biometric information generates image feature information according to a preset rule.
可以利用人脸识别算法,对人脸识别中成功识别到的人脸图像进行生物特征信息提取,提取的嘴角信息、嘴唇信息、上下嘴唇高度信息可以是由嘴部几何特征图反映出,包括左右嘴角特征点和上下嘴唇特征点,几何特征图的中心点等数据。The face recognition algorithm can be used to extract the biometric information of the face image successfully recognized in the face recognition. The extracted mouth angle information, lip information, and upper and lower lip height information can be reflected by the geometrical feature map of the mouth, including left and right. The corner feature points and upper and lower lip feature points, the center point of the geometric feature map, and the like.
预设的生物特征信息模板与图像特征信息之间的对应关系可以保存为一个对应关系表,例如,对应关系表可以如下表1。The correspondence between the preset biometric information template and the image feature information may be saved as a correspondence table. For example, the correspondence table may be as shown in Table 1.
Figure PCTCN2017082912-appb-000001
表1Table 1
应当理解的是，为了便于理解本实施例的方案，表1中生物特征信息模板的内容是被简化后的，实际的生物特征信息模板采用面部表情特征信息中的嘴部几何特征信息进行划分，包括：左右嘴角特征点和上下嘴唇特征点，每个特征点都有一个位置坐标值，可以根据这些坐标值计算得到嘴角上扬或下扬弧度以及上下嘴唇特征点的高度差。不同的面部表情特征信息，对应的嘴角弧度和高度差都是不同的；当微笑或者大笑时，嘴角几何轮廓是上扬的，嘴角弧度和高度差都是正值，并且在一个预先设定的阀值范围内；这些面部表情特征信息会预先按照不同的表情特征分类存储到数据库中，也即把生物特征信息模板存储到数据库中，并把生物特征信息模板与图像特征信息的对应关系表存储到数据库中。It should be understood that, to facilitate understanding of the solution of this embodiment, the content of the biometric information templates in Table 1 is simplified. The actual biometric information templates are divided according to the geometric feature information of the mouth in the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points; each feature point has a position coordinate value, from which the upward or downward curvature of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. For different facial expression feature information, the corresponding mouth-corner curvature and height difference are different; when smiling or laughing, the geometric contour of the mouth corners is raised, and the mouth-corner curvature and height difference are both positive and fall within a preset threshold range. This facial expression feature information is classified by expression and stored in the database in advance, that is, the biometric information templates are stored in the database, and the correspondence table between the biometric information templates and the image feature information is stored in the database.
Before face recognition is performed on the image of the target frame in the video, the method further includes: extracting the target frame from the compressed stored video file in a preset manner.
Videos are usually encoded and decoded with a compression algorithm such as H.264 to save storage space, so each frame of a compressed video does not necessarily contain complete information; a frame usually has to be reconstructed by computation from the information of the preceding and succeeding frames. Generally, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information, a B frame stores the difference between the current frame and the preceding and succeeding frames, and a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by storing only these inter-frame differences in B and P frames, thereby reducing the amount of information to be stored and transmitted. Reconstructing a picture from B or P frames requires parsing the preceding and succeeding frames, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frame images can be restored without the noise accumulated from preceding and succeeding frames affecting subsequent video reconstruction, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all the I frames are parsed out; that is, the target frames are extracted.
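A real implementation would use a demuxer to read the H.264 stream; as a simplified sketch under the assumption that the header yields a fixed I-frame interval, the target-frame selection reduces to index arithmetic:

```python
def extract_target_frames(frames, i_frame_interval):
    """Pick out the I frames (target frames) given the I-frame interval
    parsed from the video header; B/P frames are skipped to avoid the
    cost of inter-frame reconstruction. `frames` is any sequence of
    frame records, and the interval is assumed to be at most 300."""
    assert 0 < i_frame_interval <= 300
    return [f for idx, f in enumerate(frames) if idx % i_frame_interval == 0]
```

Only the selected frames are then handed to face recognition, which is the computation-saving point made above.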
Step S102: Establish a correspondence between the feature information and the target file, and classify and store the target file based on the feature information and the correspondence.
Here, after the image feature information of the picture or video is acquired, a correspondence between the image feature information and the picture or video is established.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, correspondences with the shooting time information and/or shooting location information and/or shooting weather information may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondences are saved.
For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, shooting weather information and the image have been established, they may also be saved in the form of a generated journal entry, which can be stored in the database of a mobile terminal (such as a mobile phone). The user may also choose to upload the journal to the cloud, where the cloud database is kept consistent with the mobile terminal, serving as a backup of the terminal's local copy. The journal's default feature information is stored using a list (linked-list) data structure.
For example, the time, location, and weather when a picture is taken are acquired, as well as the mood information of the person in the picture, e.g.: July 10, Shanghai, sunny, happy. Correspondences are established between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved in a local database in the form of a journal; information can also be added as needed, such as "safari park" or "travel". The user is given the choice of whether to upload to the cloud; if the user does not upload at the time, the journal can still be uploaded later whenever needed. After a local journal entry is deleted, it can be restored by downloading it from the cloud, or the corresponding cloud content can be deleted synchronously. With the above solution, text or emoticon packs can be automatically generated accurately and in real time from the time, location, weather, mood, and other information of the captured videos or images to form a journal, which is convenient for user management.
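The journal record and the linked-list storage of its default feature information might look like the following sketch (field names and values are hypothetical; the patent does not fix a schema):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class JournalEntry:
    """One journal record tying a media file to its feature information."""
    media_path: str
    time: str
    place: str
    weather: str
    mood: str
    tags: List[str] = field(default_factory=list)  # user-added keywords
    next: Optional["JournalEntry"] = None          # linked-list pointer

class Journal:
    def __init__(self):
        self.head: Optional[JournalEntry] = None

    def add(self, entry: JournalEntry):
        # Prepend the new entry to the singly linked list
        entry.next, self.head = self.head, entry

    def find(self, **criteria):
        # Walk the list and collect entries matching every criterion
        node, hits = self.head, []
        while node:
            if all(getattr(node, k) == v for k, v in criteria.items()):
                hits.append(node)
            node = node.next
        return hits

journal = Journal()
journal.add(JournalEntry("IMG_001.jpg", "July 10", "Shanghai", "sunny", "happy"))
```

The same structure could be serialized to the terminal's database and mirrored to the cloud as the paragraph above describes.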
An information processing method is provided according to an embodiment of the present disclosure. The information processing method includes: acquiring feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information of the target file at least includes parsing image feature information from an image of the picture or video; establishing a correspondence between the feature information and the target file; and saving the feature information, the target file, and the correspondence. With the above solution, when a picture or video is stored, it is classified and stored according to its feature information, which facilitates quick searching and browsing by the user, improves the user experience, and better meets user needs.
Embodiment 2
This second embodiment provides an information processing method. The processing flow of the information processing method, as shown in FIG. 2, includes the following steps:
Step S201: Acquire feature information of a target file.
Here, the target file includes a picture or a video.
Correspondingly, acquiring the feature information of the target file at least includes: performing feature recognition on an image in the target file to obtain a feature recognition result; and determining, according to a preset rule, feature information that matches the feature recognition result.
Determining, according to the preset rule, the feature information that matches the feature recognition result includes: searching for the feature recognition result in a preset set of biometric information templates to obtain a biometric information template that matches the feature recognition result; and, according to a preset correspondence between biometric information templates and feature information, finding the feature information corresponding to that biometric information template.
The feature recognition includes biometric recognition and environmental feature recognition. The feature recognition result includes at least one of: mouth-corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
The feature information at least includes image feature information. The image feature information may include the expression information of a person in a picture or video image, such as laughing, anger, or crying; the person's mood can be reflected from the expression information, for example, laughing indicates a happy mood.
The feature information may further include environmental feature information. Correspondingly, acquiring the feature information of the target file may further include: acquiring environmental feature information of the current shooting environment when the picture or video is captured. The environmental feature information includes at least one of: shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the Global Positioning System (GPS) function built into a mobile terminal (such as a mobile phone), with the GPS fix converted into shooting location information through a network query. The shooting weather information can be obtained through a network query based on the shooting time information and the shooting location information at the time of shooting.
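A minimal sketch of this collection step, with the two network queries injected as callables (the function names `reverse_geocode` and `weather_lookup` are placeholders for whatever services an implementation actually uses, not APIs named by the patent):

```python
def gather_environment_info(gps_fix, reverse_geocode, weather_lookup, clock):
    """Collect the shooting time, location, and weather for one capture.
    `gps_fix` is a (lat, lon) pair from the terminal's GPS; the two
    lookup callables stand in for the network queries described above."""
    shot_time = clock()
    location = reverse_geocode(gps_fix)            # GPS fix -> place name
    weather = weather_lookup(location, shot_time)  # place + time -> weather
    return {"time": shot_time, "location": location, "weather": weather}
```

Injecting the lookups keeps the sketch testable offline while matching the time/location/weather triple the method stores.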
Parsing the image feature information from the image of the picture or video includes: performing face recognition on the image in the picture, or on the image of the target frame in the video, and extracting biometric information from the recognized face image; and generating image feature information from the extracted biometric information according to a preset rule.
A face recognition algorithm can be used to extract biometric information from face images successfully recognized during face recognition. The extracted mouth-corner information, lip information, and upper and lower lip height information may be reflected by a mouth geometric feature map, including data such as the left and right mouth-corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
The correspondence between the preset biometric information templates and the image feature information can be saved as a correspondence table; for example, the correspondence table may be as shown in Table 1 below.
Figure PCTCN2017082912-appb-000002
Table 1
It should be understood that, to facilitate understanding of the solution of this embodiment, the content of the biometric information templates in Table 1 is simplified. The actual biometric information templates are divided according to the mouth geometric feature information within the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward curvature of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth-corner curvatures and height differences; when a person smiles or laughs, the geometric contour of the mouth corners turns upward, and both the curvature and the height difference are positive values within a preset threshold range. These pieces of facial expression feature information are stored in a database in advance according to different expression features; that is, the biometric information templates are stored in the database, and a correspondence table between the biometric information templates and image feature information is also stored in the database.
The mouth geometric feature information extracted through face recognition can be compared one by one with the biometric information templates pre-classified in the database to obtain different similarity values; the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, that is, as the image feature information.
Before face recognition is performed on the image of the target frame in the video, the method further includes: extracting the target frame from the compressed stored video file in a preset manner.
Videos are usually encoded and decoded with a compression algorithm such as H.264 to save storage space, so each frame of a compressed video does not necessarily contain complete information; a frame usually has to be reconstructed by computation from the information of the preceding and succeeding frames. Generally, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information, a B frame stores the difference between the current frame and the preceding and succeeding frames, and a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by storing only these inter-frame differences in B and P frames, thereby reducing the amount of information to be stored and transmitted. Reconstructing a picture from B or P frames requires parsing the preceding and succeeding frames, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frame images can be restored without the noise accumulated from preceding and succeeding frames affecting subsequent video reconstruction, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all the I frames are parsed out; that is, the target frames are extracted.
Step S202: Establish a correspondence between the feature information and the target file, and store the target file based on the feature information and the correspondence.
Here, after the image feature information of the picture or video is acquired, a correspondence between the image feature information and the picture or video is established.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, correspondences with the shooting time information and/or shooting location information and/or shooting weather information may also be established, and the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondences are saved.
For the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, shooting weather information and the image have been established, they may also be saved in the form of a generated journal entry, which can be stored in the database of a mobile terminal (such as a mobile phone). The user may also choose to upload the journal to the cloud, where the cloud database is kept consistent with the mobile terminal, serving as a backup of the terminal's local copy. The journal's default feature information is stored using a list (linked-list) data structure.
For example, the time, location, and weather when a picture is taken are acquired, as well as the mood information of the person in the picture, e.g.: July 10, Shanghai, sunny, happy. Correspondences are established between the captured video or image and the corresponding time, location, weather, mood, and other information, which can be saved in a local database in the form of a journal; information can also be added as needed, such as "safari park" or "travel". The user is given the choice of whether to upload to the cloud; if the user does not upload at the time, the journal can still be uploaded later whenever needed. After a local journal entry is deleted, it can be restored by downloading it from the cloud, or the corresponding cloud content can be deleted synchronously. With the above solution, text or emoticon packs can be automatically generated accurately and in real time from the time, location, weather, mood, and other information of the captured videos or images to form a journal, which is convenient for user management.
Step S203: Receive a call request, where the call request carries feature information of a target file.
Here, the target file includes a picture or a video, and the feature information at least includes image feature information. The feature information of the target file at least includes image feature information parsed from the image of the picture or video; the image feature information may include the expression information of a person in the picture or video image, such as laughing, anger, or crying, from which the person's mood can be reflected, for example, laughing indicates a happy mood. The feature information may further include at least one of shooting time information, shooting location information, and shooting weather information at the time the picture or video was captured.
Step S204: Find the target file corresponding to the feature information in the correspondence.
In an embodiment, for the same picture or video, if correspondences between the image feature information, shooting time information, shooting location information, shooting weather information and the image have been established, they may also be saved in the form of a generated journal entry, whose default feature information is stored in a list (linked-list) data structure. The journal can subsequently be retrieved and queried through the linked list according to its feature information; the user can also add feature information as needed, and edit and modify that feature information.
For example, journals, photos, or videos can be queried by attributes such as time, location, and mood, e.g., querying the corresponding journal, photo, or video by the time period 2016.6.1-2016.6.30, "travel", "happy", or keywords added later by the user. Since editing, modification, query, and retrieval functions are also provided for journals, photos, and videos, it is convenient for urban white-collar workers to record their daily life, and for busy mothers to record their children's growth.
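The attribute query above can be sketched as a date-range plus keyword filter (entries are plain dictionaries here, and the field names are illustrative, not a schema fixed by the patent):

```python
from datetime import date

def query(entries, start, end, keywords=()):
    """Filter journal entries by a date range and optional keywords
    (a keyword matches either the mood field or a user-added tag)."""
    hits = []
    for e in entries:
        if not (start <= e["date"] <= end):
            continue
        if all(k in e["tags"] or k == e.get("mood") for k in keywords):
            hits.append(e)
    return hits

entries = [
    {"date": date(2016, 6, 15), "mood": "happy", "tags": ["travel"]},
    {"date": date(2016, 7, 10), "mood": "happy", "tags": ["safari"]},
]
result = query(entries, date(2016, 6, 1), date(2016, 6, 30), ("travel", "happy"))
```

Only the June entry tagged "travel" with mood "happy" matches the example criteria from the paragraph above.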
Step S205: Call the found target file.
In an embodiment, the found target file may be displayed, sent to friends or family, or uploaded to Qzone, Weibo, or WeChat Moments.
A file calling method is provided according to an embodiment of the present disclosure. The file calling method includes: receiving a call request, where the call request carries feature information of a target file; finding, according to the feature information and the correspondence saved in the information processing method of Embodiment 1, the target file corresponding to the feature information; and calling the found target file. With the above solution, when a picture or video is stored, it is stored according to its feature information, which facilitates quick searching and browsing by the user, improves the user experience, and better meets user needs.
Embodiment 3
This embodiment provides an information processing apparatus. The composition of the information processing apparatus, as shown in FIG. 3, includes:
an acquisition module 301, configured to acquire feature information of a target file;
where the target file includes a picture or a video;
a correspondence establishing module 302, configured to establish a correspondence between the feature information and the target file; and
a saving module 303, configured to store the target file based on the feature information and the correspondence.
The feature information at least includes image feature information; the image feature information may include the expression information of a person in a picture or video image, such as laughing, anger, or crying, and the person's mood can be reflected from the expression information, for example, laughing indicates a happy mood. The feature information may further include environmental feature information. The acquisition module 301 is further configured to acquire environmental feature information of the current shooting environment when a picture or video is captured. The environmental feature information includes at least one of: shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the GPS function built into a mobile terminal (such as a mobile phone), with the GPS fix converted into shooting location information through a network query. The shooting weather information can be obtained through a network query based on the shooting time information and the shooting location information at the time of shooting.
The acquisition module 301 includes an acquisition sub-module 3011 and a determination sub-module 3012. The acquisition sub-module 3011 is configured to perform feature recognition on an image in the target file to obtain a feature recognition result. The determination sub-module 3012 is configured to determine, according to a preset rule, feature information that matches the feature recognition result. Here, the feature recognition includes biometric recognition and environmental feature recognition.
The determination sub-module 3012 is configured to search for the feature recognition result in a preset set of biometric information templates to obtain a biometric information template that matches the feature recognition result, and to find, according to a preset correspondence between biometric information templates and feature information, the feature information corresponding to that biometric information template.
The feature recognition result includes at least one of: mouth-corner information, lip information, upper and lower lip height information, shooting time information, shooting location information, and shooting weather information.
A face recognition algorithm can be used to extract biometric information from face images successfully recognized during face recognition. The extracted mouth-corner information, lip information, and upper and lower lip height information may be reflected by a mouth geometric feature map, including data such as the left and right mouth-corner feature points, the upper and lower lip feature points, and the center point of the geometric feature map.
The correspondence between the preset biometric information templates and the image feature information can be saved as a correspondence table; for example, the correspondence table may be as shown in Table 1 below.
Figure PCTCN2017082912-appb-000003
Table 1
It should be understood that, to facilitate understanding of the solution of this embodiment, the content of the biometric information templates in Table 1 is simplified. The actual biometric information templates are divided according to the mouth geometric feature information within the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate value, from which the upward or downward curvature of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth-corner curvatures and height differences; when a person smiles or laughs, the geometric contour of the mouth corners turns upward, and both the curvature and the height difference are positive values within a preset threshold range. These pieces of facial expression feature information are stored in a database in advance according to different expression features; that is, the biometric information templates are stored in the database, and a correspondence table between the biometric information templates and image feature information is also stored in the database.
The mouth geometric feature information extracted through face recognition can be compared one by one with the biometric information templates pre-classified in the database to obtain different similarity values; the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, that is, as the image feature information.
The apparatus further includes a second extraction module 304, configured to extract the target frame from the compressed stored video file in a preset manner.
Videos are usually encoded and decoded with a compression algorithm such as H.264 to save storage space, so each frame of a compressed video does not necessarily contain complete information; a frame usually has to be reconstructed by computation from the information of the preceding and succeeding frames. Generally, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information, a B frame stores the difference between the current frame and the preceding and succeeding frames, and a P frame stores the difference between the current frame and the preceding frame. Compressed video reduces its size mainly by storing only these inter-frame differences in B and P frames, thereby reducing the amount of information to be stored and transmitted. Reconstructing a picture from B or P frames requires parsing the preceding and succeeding frames, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that B and P frame images can be restored without the noise accumulated from preceding and succeeding frames affecting subsequent video reconstruction, compressed video inserts an I frame containing complete picture information at fixed intervals, with a maximum interval of 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, after the video header information is parsed, the I-frame interval is obtained, and then all the I frames are parsed out; that is, the target frames are extracted.
After the acquisition module 301 acquires the image feature information of the picture or video, the correspondence establishing module 302 establishes a correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, the correspondence establishing module 302 may also establish correspondences with the shooting time information and/or shooting location information and/or shooting weather information, and the saving module 303 saves the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondences.
For the same picture or video, once the correspondences between the image and the image feature information, shooting time information, shooting location information, and shooting weather information have been established, they may also be saved in the saving module 303 in the form of a log. The user may choose to upload the log to the cloud, in which case the cloud database is kept consistent with the saving module 303. The log's default feature information is stored in a linked-list (list) data structure.
For example, the acquisition module 301 obtains the time, location, and weather at which a picture was taken, together with the mood information of the person in the picture, e.g. "July 10, Shanghai, sunny, happy". The correspondence establishing module 302 associates the captured video or image with the corresponding time, location, weather, mood, and other information, which may be saved in the saving module 303 in the form of a log; further information, such as "safari park" or "travel", can be added as needed. The user is offered the choice of whether to upload to the cloud; if the user declines at the time, the log can still be uploaded later whenever needed. After a local log is deleted, it can be restored by downloading from the cloud, or the corresponding cloud content can be deleted in synchronization. With this scheme, text or emoji annotations can be generated accurately and in real time from the time, location, weather, mood, and other information of a captured video or image, forming a log that is convenient for the user to manage.
In the information processing apparatus provided by this embodiment of the present disclosure: the acquisition module 301 acquires feature information of a target file, where the target file includes a picture or a video, and acquiring the feature information at least includes parsing image feature information from the image of the picture or video; the correspondence establishing module 302 establishes a correspondence between the feature information and the target file; and the saving module 303 saves the feature information, the target file, and the correspondence. With this scheme, when pictures or videos are stored, they are classified according to their feature information, which makes it convenient for the user to find and browse them quickly, improves the user experience, and better satisfies user needs.
Embodiment 4
This embodiment provides an information processing apparatus whose composition, as shown in FIG. 4, includes:
an acquisition module 301, configured to acquire feature information of a target file,
where the target file includes a picture or a video;
a correspondence establishing module 302, configured to establish a correspondence between the feature information and the target file; and
a saving module 303, configured to store the target file based on the feature information and the correspondence.
The feature information includes at least image feature information. The image feature information may include expression information of a person in the picture or video image, such as smiling, anger, or crying; the expression information can reflect the person's mood, e.g. a smile indicates a happy mood. The feature information may further include environmental feature information. The acquisition module 301 is further configured to acquire environmental feature information of the current shooting environment when a picture or video is taken. The environmental feature information includes at least one of shooting time information, shooting location information, and shooting weather information. The shooting location information can be obtained through the built-in GPS function of a mobile terminal (such as a mobile phone), with the GPS coordinates resolved to a location through a network query; the shooting weather information can be obtained through a network query based on the shooting time information and shooting location information.
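The environment-feature acquisition described above can be sketched as below. The two lookup callables stand in for the GPS reverse-geocoding and weather network queries the text mentions; they are assumptions, not a real API, as are the field names.

```python
from datetime import datetime

def capture_environment(now, gps_to_place, weather_lookup, lat, lon):
    """Sketch of acquiring environmental feature information at capture time:
    shooting time, shooting location (from GPS via a network query), and
    shooting weather (from a network query on time and location)."""
    place = gps_to_place(lat, lon)
    return {
        "time": now.strftime("%Y-%m-%d %H:%M"),
        "place": place,
        "weather": weather_lookup(place, now),
    }

env = capture_environment(
    datetime(2016, 7, 10, 9, 30),
    lambda lat, lon: "Shanghai",   # placeholder reverse-geocoding query
    lambda place, when: "sunny",   # placeholder weather query
    31.23, 121.47,
)
print(env)  # {'time': '2016-07-10 09:30', 'place': 'Shanghai', 'weather': 'sunny'}
```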
a receiving module 304, configured to receive a call request, the call request carrying feature information of a target file;
a search module 305, configured to search the correspondences for the target file corresponding to the feature information; and
a calling module 306, configured to call the target file.
For the same picture or video, once the correspondences between the image and the image feature information, shooting time information, shooting location information, and shooting weather information have been established, a log may also be generated and saved; the log's default feature information is stored in a linked-list (list) data structure. The log can subsequently be retrieved and queried through the linked list according to its feature information, and the user can also add feature information as needed and edit or modify it.
For example, logs, photos, or videos can be queried by attributes such as time, location, and mood, e.g. retrieving the corresponding log, photo, or video for the time period 2016.6.1-2016.6.30 with the attributes "travel" and "happy", or with keywords the user added later. Since editing, modification, query, and retrieval functions are provided for logs, photos, and videos, it is convenient for urban white-collar workers to record their daily life and for busy mothers to record their children's growth.
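The query-by-attribute retrieval described above can be sketched over a list of log entries like this. The entry fields and the matching rule are illustrative assumptions; the patent does not fix a schema.

```python
from datetime import date

# Illustrative log entries in the list structure described above.
logs = [
    {"date": date(2016, 6, 5),  "place": "Shanghai", "mood": "happy", "tags": ["travel"]},
    {"date": date(2016, 6, 20), "place": "Beijing",  "mood": "happy", "tags": ["safari park"]},
    {"date": date(2016, 7, 10), "place": "Shanghai", "mood": "calm",  "tags": []},
]

def query_logs(entries, start, end, **attrs):
    """Return entries within [start, end] whose attributes (or user-added
    tags) match, mimicking the query by time, location, and mood."""
    hits = []
    for e in entries:
        if not (start <= e["date"] <= end):
            continue
        if all(e.get(k) == v or v in e.get("tags", []) for k, v in attrs.items()):
            hits.append(e)
    return hits

# Query for 2016.6.1-2016.6.30 with a happy mood
print(len(query_logs(logs, date(2016, 6, 1), date(2016, 6, 30), mood="happy")))  # 2
```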
The calling module 306 is used to call the target file found by the search module 305.
In an embodiment, the found target file may be displayed, sent to friends or family, or uploaded to Qzone, Weibo, or WeChat Moments.
The acquisition module 301 includes an acquisition submodule 3011 and a determination submodule 3012. The acquisition submodule 3011 is configured to perform feature recognition on the images in the target file to obtain a feature recognition result; the determination submodule 3012 is configured to determine, according to a preset rule, feature information that matches the feature recognition result. Here, feature recognition includes biometric feature recognition and environmental feature recognition.
The determination submodule 3012 is configured to look up the feature recognition result in a preset set of biometric information templates to obtain a biometric information template matching the feature recognition result, and then, according to a preset correspondence between biometric information templates and feature information, find the feature information corresponding to that biometric information template.
The feature recognition result includes at least one of mouth-corner information, lip information, upper/lower lip height information, shooting time information, shooting location information, and shooting weather information.
A face recognition algorithm can be used to extract biometric feature information from a face image that has been successfully recognized. The extracted mouth-corner information, lip information, and upper/lower lip height information may be represented by a geometric feature map of the mouth, including data such as the left and right mouth-corner feature points, the upper and lower lip feature points, and the center point of the feature map.
The preset correspondence between biometric information templates and image feature information may be saved as a correspondence table; for example, the correspondence table may be as shown in Table 1 below.
Table 1 (rendered as an image in the original document: the correspondence between biometric information templates and image feature information)
It should be understood that, to make the scheme of this embodiment easier to follow, the content of the biometric information templates in Table 1 has been simplified. The actual biometric information templates are organized by the mouth geometry within the facial expression feature information, including the left and right mouth-corner feature points and the upper and lower lip feature points. Each feature point has a position coordinate, from which the upward or downward curvature of the mouth corners and the height difference between the upper and lower lip feature points can be calculated. Different facial expression feature information corresponds to different mouth-corner curvatures and height differences: when smiling or laughing, the geometric contour of the mouth corners turns upward, and both the curvature and the height difference are positive and fall within a preset threshold range. This facial expression feature information is stored in a database in advance, classified by expression; that is, the biometric information templates are stored in the database, together with the correspondence table between biometric information templates and image feature information.
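The threshold rule stated above can be sketched as a toy classifier. The threshold values and the "neutral"/"sad" labels are illustrative assumptions; the patent only states that for a smile both quantities are positive and within a preset range.

```python
def classify_expression(corner_lift, lip_height_diff, smile_threshold=(0.0, 5.0)):
    """Toy classifier following the rule above: when smiling, the mouth-corner
    lift and the upper/lower lip height difference are both positive and
    within a preset threshold range."""
    lo, hi = smile_threshold
    if corner_lift > lo and lip_height_diff > lo and corner_lift <= hi:
        return "happy"
    if corner_lift < 0:       # downturned mouth corners, assumed label
        return "sad"
    return "neutral"

print(classify_expression(1.8, 0.6))   # happy
print(classify_expression(-1.2, 0.3))  # sad
```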
The mouth geometry extracted by face recognition is compared one by one against the pre-classified biometric information templates in the database to obtain similarity values; the image feature information corresponding to the highest similarity value is taken as the expression of the person in the current photo or video image, i.e., as the image feature information.
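The compare-all-templates-and-take-the-best step above can be sketched as follows. The similarity measure (negative Euclidean distance) and the template values are assumptions of this sketch; the patent does not specify a measure.

```python
def best_matching_template(mouth_features, templates):
    """Compare the extracted mouth geometry against each pre-classified
    template and return the label with the highest similarity."""
    def similarity(a, b):
        # negative Euclidean distance: larger means more similar
        return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(templates, key=lambda label: similarity(mouth_features, templates[label]))

templates = {
    # (mouth-corner lift, lip height difference) — illustrative values
    "happy": (2.0, 1.0),
    "sad":   (-2.0, 0.5),
}
print(best_matching_template((1.7, 0.9), templates))  # happy
```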
The apparatus further includes a second extraction module 304, used to extract target frames from a compressed video file in a preset manner.
Captured video is usually encoded with a compression algorithm such as H.264 to save storage space, so an individual frame of a compressed video does not necessarily carry complete image information; a picture can often be restored only by computation based on the information in the preceding and following frames. In general, video frames are divided into three types, I, B, and P, which store information such as chrominance and luminance. An I frame contains complete image information; a B frame encodes the difference between the current frame and both the preceding and following frames; a P frame encodes the difference between the current frame and the preceding frame. Compressed video reduces file size mainly by having B and P frames store only these differences, which cuts the amount of information to be saved and transmitted. Recovering a picture from a B or P frame requires parsing the neighboring frames as well, which is computationally expensive; therefore, in this embodiment only I frames are parsed. To ensure that noise accumulated from neighboring frames does not degrade the restoration of subsequent B and P frames during parsing, compressed video inserts an I frame containing complete picture information at a fixed interval, at most every 300 frames (the frame rate is usually greater than 30 frames per second). In this embodiment, the video header information is parsed to obtain the I-frame interval, and then all I frames are decoded; that is, the target frames are extracted.
After the acquisition module 301 obtains the image feature information of a picture or video, the correspondence establishing module 302 establishes a correspondence between the image feature information and the picture or video, and the saving module 303 saves the image feature information, the picture or video, and the correspondence.
For the same picture or video, in addition to the correspondence between the image feature information and the picture or video, the correspondence establishing module 302 may also establish correspondences with the shooting time information and/or shooting location information and/or shooting weather information, and the saving module 303 saves the shooting time information and/or shooting location information and/or shooting weather information, the picture or video, and the correspondences.
For the same picture or video, once the correspondences between the image and the image feature information, shooting time information, shooting location information, and shooting weather information have been established, they may also be saved in the saving module 303 in the form of a log. The user may choose to upload the log to the cloud, in which case the cloud database is kept consistent with the saving module 303. The log's default feature information is stored in a linked-list (list) data structure.
For example, the acquisition module 301 obtains the time, location, and weather at which a picture was taken, together with the mood information of the person in the picture, e.g. "July 10, Shanghai, sunny, happy". The correspondence establishing module 302 associates the captured video or image with the corresponding time, location, weather, mood, and other information, which may be saved in the saving module 303 in the form of a log; further information, such as "safari park" or "travel", can be added as needed. The user is offered the choice of whether to upload to the cloud; if the user declines at the time, the log can still be uploaded later whenever needed. After a local log is deleted, it can be restored by downloading from the cloud, or the corresponding cloud content can be deleted in synchronization. With this scheme, text or emoji annotations can be generated accurately and in real time from the time, location, weather, mood, and other information of a captured video or image, forming a log that is convenient for the user to manage.
Another embodiment further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the information processing method described above.
The acquisition module 301, correspondence establishing module 302, saving module 303, receiving module 304, search module 305, and calling module 306 in the information processing apparatus proposed in the embodiments of the present disclosure may all be implemented by a processor, or of course by a specific logic circuit. The processor may be located on an electronic device such as a server; in practical applications, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
Obviously, those skilled in the art should understand that the modules or steps of the above embodiments of the present disclosure may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage medium (ROM/RAM, magnetic disk, optical disc) and executed by the computing device. In some cases, the steps shown or described may be performed in an order different from that given here, or the modules or steps may be fabricated as individual integrated circuit modules, or multiple of them may be fabricated as a single integrated circuit module. Therefore, the present disclosure is not limited to any specific combination of hardware and software.
The above is a further detailed description of the embodiments of the present disclosure in conjunction with specific implementations, and the specific implementation of the present disclosure shall not be considered limited to these descriptions. For a person of ordinary skill in the technical field to which the present disclosure belongs, several simple deductions or substitutions may be made without departing from the concept of the present disclosure, and all of these shall be regarded as falling within the protection scope of the present disclosure.
Industrial Applicability
The information processing method provided by the embodiments of the present disclosure acquires feature information of a target file, establishes a correspondence between the feature information and the target file, and classifies and stores the target file based on the feature information and the correspondence. With this scheme, when pictures or videos are stored, they are classified according to their feature information, which makes it convenient for the user to find and browse them quickly, improves the user experience, and better satisfies user needs.

Claims (11)

  1. An information processing method, the method comprising:
    acquiring feature information of a target file;
    establishing a correspondence between the feature information and the target file; and
    classifying and storing the target file based on the feature information and the correspondence.
  2. The information processing method according to claim 1, wherein acquiring the feature information of the target file comprises:
    performing feature recognition on an image in the target file to obtain a feature recognition result; and
    determining, according to a preset rule, feature information that matches the feature recognition result.
  3. The information processing method according to claim 2, wherein determining, according to the preset rule, the feature information that matches the feature recognition result comprises:
    looking up the feature recognition result in a preset set of biometric information templates to obtain a biometric information template that matches the feature recognition result; and
    finding, according to a preset correspondence between biometric information templates and feature information, the feature information corresponding to the biometric information template.
  4. The information processing method according to claim 2, wherein the feature recognition result comprises at least one of mouth-corner information, lip information, upper/lower lip height information, shooting time information, shooting location information, and shooting weather information.
  5. The information processing method according to claim 1, wherein the method further comprises:
    receiving call request information, the call request information carrying feature information of a target file;
    searching the correspondences for the target file corresponding to the feature information; and
    calling the target file.
  6. An information processing apparatus, comprising:
    an acquisition module, configured to acquire feature information of a target file;
    a correspondence establishing module, configured to establish a correspondence between the feature information and the target file; and
    a saving module, configured to classify and store the target file based on the feature information and the correspondence.
  7. The information processing apparatus according to claim 6, wherein the acquisition module comprises:
    a first extraction submodule, configured to perform feature recognition on an image in the target file to obtain a feature recognition result; and
    a determination submodule, configured to determine, according to a preset rule, feature information that matches the feature recognition result.
  8. The information processing apparatus according to claim 7, wherein the determination submodule is configured to look up the feature recognition result in a preset set of biometric information templates to obtain a biometric information template that matches the feature recognition result, and
    to find, according to a preset correspondence between biometric information templates and feature information, the feature information corresponding to the biometric information template.
  9. The information processing apparatus according to claim 7, wherein the feature recognition result comprises at least one of mouth-corner information, lip information, upper/lower lip height information, shooting time information, shooting location information, and shooting weather information.
  10. The information processing apparatus according to claim 6, wherein the apparatus further comprises:
    a receiving module, configured to receive a call request, the call request carrying feature information of a target file;
    a search module, configured to search the correspondences for the target file corresponding to the feature information; and
    a calling module, configured to call the target file.
  11. A computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the information processing method according to any one of claims 1 to 5.
PCT/CN2017/082912 2016-08-09 2017-05-03 Information processing method, apparatus and storage medium WO2018028253A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610647621.8 2016-08-09
CN201610647621.8A CN107704471A (en) 2016-08-09 2016-08-09 A kind of information processing method and device and file call method and device

Publications (1)

Publication Number Publication Date
WO2018028253A1 true WO2018028253A1 (en) 2018-02-15

Family

ID=61162627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/082912 WO2018028253A1 (en) 2016-08-09 2017-05-03 Information processing method, apparatus and storage medium

Country Status (2)

Country Link
CN (1) CN107704471A (en)
WO (1) WO2018028253A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472098A (en) * 2019-08-20 2019-11-19 北京达佳互联信息技术有限公司 Determination method, apparatus, electronic equipment and the storage medium of video content topic
CN113660482A (en) * 2021-07-28 2021-11-16 上海立可芯半导体科技有限公司 Automatic testing method and device for AI camera equipment or module

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965588A (en) * 2018-06-27 2018-12-07 维沃移动通信有限公司 A kind of information cuing method and mobile terminal
CN111046814A (en) * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN111221782B (en) * 2020-01-17 2024-04-09 惠州Tcl移动通信有限公司 File searching method and device, storage medium and mobile terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744086A (en) * 2004-09-01 2006-03-08 松下电器产业株式会社 Image file processing method and related technique thereof
CN101055592A (en) * 2007-05-20 2007-10-17 宁尚国 Image information generation method and device
CN103226575A (en) * 2013-04-01 2013-07-31 北京小米科技有限责任公司 Image processing method and device
CN105243084A (en) * 2015-09-07 2016-01-13 广东欧珀移动通信有限公司 Photographed image file storage method and system and photographed image file search method and system
CN105787131A (en) * 2016-03-31 2016-07-20 宇龙计算机通信科技(深圳)有限公司 Information processing method and device and mobile terminal

Also Published As

Publication number Publication date
CN107704471A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
US10885100B2 (en) Thumbnail-based image sharing method and terminal
WO2018028253A1 (en) Information processing method, apparatus and storage medium
US10013600B2 (en) Digital image processing method and apparatus, and storage medium
US8810684B2 (en) Tagging images in a mobile communications device using a contacts list
US9830727B2 (en) Personalizing image capture
CN104572732A (en) Method and device for inquiring user identification and method and device for acquiring user identification
WO2016127478A1 (en) Image processing method and device, and terminal
CN105654039B (en) The method and apparatus of image procossing
WO2017054442A1 (en) Image information recognition processing method and device, and computer storage medium
TW201508520A (en) Method, Server and System for Setting Background Image
WO2019218459A1 (en) Photo storage method, storage medium, server, and apparatus
US11917158B2 (en) Static video recognition
CN105979363A (en) Identity identification method and device
WO2021104097A1 (en) Meme generation method and apparatus, and terminal device
WO2021115483A1 (en) Image processing method and related apparatus
WO2017067485A1 (en) Picture management method and device, and terminal
US20170192965A1 (en) Method and apparatus for smart album generation
WO2017101323A1 (en) Method and device for image capturing and information pushing and mobile phone
CN104331515A (en) Method and system for generating travel journal automatically
WO2015196681A1 (en) Picture processing method and electronic device
CN111316628A (en) Image shooting method and image shooting system based on intelligent terminal
KR101715708B1 (en) Automated System for Providing Relation Related Tag Using Image Analysis and Method Using Same
US20170200062A1 (en) Method of determination of stable zones within an image stream, and portable device for implementing the method
US20170171462A1 (en) Image Collection Method, Information Push Method and Electronic Device, and Mobile Phone
US20230388663A1 (en) Image frame selection for multi-frame fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17838385

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17838385

Country of ref document: EP

Kind code of ref document: A1