WO2018170695A1 - Method and apparatus for identifying description attributes of an apparent feature - Google Patents

Method and apparatus for identifying description attributes of an apparent feature

Info

Publication number
WO2018170695A1
WO2018170695A1 (PCT/CN2017/077366)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
attribute
apparent
location
character
Prior art date
Application number
PCT/CN2017/077366
Other languages
English (en)
French (fr)
Inventor
姚春凤
冯柏岚
李德丰
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to BR112019019517A priority Critical patent/BR112019019517A8/pt
Priority to EP17902197.7A priority patent/EP3591580A4/en
Priority to JP2019551650A priority patent/JP6936866B2/ja
Priority to CN201780088761.9A priority patent/CN110678878B/zh
Priority to PCT/CN2017/077366 priority patent/WO2018170695A1/zh
Priority to KR1020197030463A priority patent/KR102331651B1/ko
Publication of WO2018170695A1 publication Critical patent/WO2018170695A1/zh
Priority to US16/577,470 priority patent/US11410411B2/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40: Extraction of image or video features
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a description attribute identification method and apparatus for an apparent feature.
  • Pedestrian attribute recognition is a pattern recognition technology used to identify the description attributes of the apparent features of pedestrians in surveillance video.
  • A pedestrian's appearance characteristics include gender, age, body shape, clothing, hair, accessories, orientation, and so on.
  • Each apparent feature includes several description attributes. For example, when the apparent feature is gender, the description attributes of gender include male and female.
  • When the apparent feature is hair, the description attributes of the hair include long hair and short hair; hair may also have other description attributes, for example, when distinguished by color, the description attributes of the hair include white, black, brown, and so on.
  • The object of pedestrian attribute recognition is a target image captured by a camera at any angle. Its purpose is to reduce the difficulty of searching visual information and to improve the accuracy and speed of visual information recognition by identifying the description attributes of a pedestrian's apparent features.
  • The invention discloses a method and an apparatus for identifying description attributes of apparent features, so that recognition regions related to the description attributes of an apparent feature can be selected from a target image in a targeted manner according to the apparent feature. This reduces meaningless image processing operations and the workload of computer image processing.
  • A first aspect provides a method for identifying a description attribute of an apparent feature, performed by an image processing apparatus. The method comprises: acquiring a target image that includes a character, where the apparent feature is used to indicate a type to which a characteristic of the character's appearance belongs and has a local attribute, the local attribute indicating that the image processing apparatus processes the target image locally; acquiring a position feature of the apparent feature, the position feature indicating the position, in a preset character model, of the part of the character embodied by the apparent feature; identifying a target area according to the position feature, the target area including that part of the character; and then performing feature analysis on the target area to identify the description attribute of the character's apparent feature.
  • In this way, the target area containing the part of the character embodied by the apparent feature is selected from the target image in a targeted manner as the recognition area for feature analysis. This reduces meaningless recognition areas, simplifies image processing, saves recognition time for the description attribute, and reduces the workload of computer image processing.
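As a concrete illustration of identifying a target area from a position feature, the following sketch assumes the position feature is expressed as a normalized bounding box (x0, y0, x1, y1) in the preset character model's coordinate system; the function name, the box format, and the nested-list image are illustrative assumptions, not details fixed by the disclosure.

```python
# Hypothetical sketch: crop the target area for a local apparent feature.
# Assumption: the position feature is a normalized bounding box
# (x0, y0, x1, y1) in the preset character-model coordinate system.

def crop_target_area(image, position_feature):
    """Scale the normalized box to the target image and crop that region."""
    height, width = len(image), len(image[0])
    x0, y0, x1, y1 = position_feature
    left, right = int(x0 * width), int(x1 * width)
    top, bottom = int(y0 * height), int(y1 * height)
    return [row[left:right] for row in image[top:bottom]]

# Example: a 10x10 grid standing in for pixels, and a made-up "hair"
# position feature covering the upper part of the character model.
image = [[(r, c) for c in range(10)] for r in range(10)]
target_area = crop_target_area(image, (0.2, 0.0, 0.8, 0.3))
print(len(target_area), len(target_area[0]))  # 3 6
```

Feature analysis would then run on `target_area` only, rather than on the whole image.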
  • Optionally, the method further includes: receiving a location attribute of the apparent feature, the location attribute being used to indicate that the apparent feature has a local attribute.
  • Alternatively, the method further includes: acquiring a location attribute of the character's apparent feature from a pre-stored correspondence between the apparent feature and the location attribute, where the location attribute is used to indicate that the apparent feature has a local attribute.
  • Optionally, the method further includes: taking the target area as a center, moving the target area in specified directions to obtain one or more offset areas; performing feature analysis on the offset areas to identify other description attributes of the character's apparent feature; and determining, according to a preset algorithm, a target description attribute from among the description attribute and the other description attributes, the target description attribute being the one closest to the target data.
  • Optionally, the method further includes: extending the offset area or the target area outward, centered on the offset area or the target area, to obtain one or more candidate regions; performing feature analysis on the candidate regions to identify other description attributes of the character's apparent feature; and determining, according to a preset algorithm, a target description attribute from among the description attribute and the other description attributes, the target description attribute being the one closest to the target data.
  • By extending the target area or the offset area outward to adjust its position, a candidate area that includes the part of the character can be obtained even when, because the target image is unclear, the original target area does not include the part of the character or includes only a portion of it. This reduces the risk of large recognition errors in the description attribute caused by a target area or offset area that captures the character's part incompletely.
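The outward extension can be pictured as a margin expansion clamped to the image bounds; the margin size and the (x0, y0, x1, y1) box representation below are illustrative assumptions.

```python
# Hypothetical sketch: extend a target or offset area outward to form a
# candidate region, clamping at the image borders so the box stays valid.

def extend_area(box, margin, image_w, image_h):
    x0, y0, x1, y1 = box
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(image_w, x1 + margin), min(image_h, y1 + margin))

candidate = extend_area((20, 40, 60, 80), margin=8, image_w=64, image_h=128)
print(candidate)  # (12, 32, 64, 88) -- clamped at the right image border
```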
  • Optionally, moving the target area in specified directions, centered on the target area, to obtain one or more offset areas includes: dividing the target area into a plurality of block maps, where the block maps have the same shape and are contiguous; and, centered on the target area and taking one block map as an offset unit, offsetting the target area in one or more directions by one or more offset units to obtain the one or more offset areas, where each offset area has the same size as the target area.
  • Optionally, acquiring the location feature of the apparent feature includes: receiving information including the location feature of the apparent feature, the information being used to indicate the location feature of the apparent feature.
  • Alternatively, acquiring the location feature of the apparent feature includes: querying a pre-stored correspondence between the apparent feature and the location feature, and obtaining the location feature of the apparent feature according to the apparent feature and the correspondence.
  • A second aspect provides a method for identifying a description attribute of an apparent feature, comprising: acquiring a target image that includes a character, where the apparent feature is used to indicate a type to which a characteristic of the character's appearance belongs and has a local attribute indicating that the image processing apparatus processes the target image locally; acquiring a first position feature and a second position feature of the character's apparent feature to determine the positions, in a preset character model, of the first part and the second part of the character embodied by the apparent feature, the first position feature indicating the position of the first part in the preset character model and the second position feature indicating the position of the second part; obtaining the maximum distance between the first part and the second part according to the first position feature and the second position feature; and identifying a target area according to the maximum distance, the target area including the first part and the second part of the character.
  • In this way, the target areas containing the multiple parts of the character embodied by the apparent feature are selected from the target image in a targeted manner as the recognition areas for feature analysis, which reduces meaningless recognition areas, simplifies image processing, saves recognition time for the description attribute, and reduces the workload of computer image processing.
  • Optionally, the maximum distance is compared with a preset threshold, which determines whether the image processing apparatus performs local or global processing on the target image. If the maximum distance is less than the preset threshold, the image processing apparatus is instructed to process the target image locally; if the maximum distance is greater than or equal to the preset threshold, the image processing apparatus is instructed to process the target image globally, that is, to process the target image as a whole.
  • Optionally, the method further includes: receiving a location attribute of the apparent feature, the location attribute being used to indicate that the apparent feature has a local attribute.
  • Alternatively, the method further includes: acquiring a location attribute of the character's apparent feature from a pre-stored correspondence between the apparent feature and the location attribute, where the location attribute is used to indicate that the apparent feature has a local attribute.
  • Optionally, acquiring the first location feature and the second location feature of the character's apparent feature includes: receiving information including the first location feature and the second location feature of the apparent feature, the information being used to indicate them.
  • Alternatively, acquiring the first location feature and the second location feature of the character's apparent feature includes: querying a pre-stored correspondence between the apparent feature and the first and second location features, and acquiring the first location feature and the second location feature of the apparent feature according to the apparent feature and the correspondence.
  • In a sixth implementation, the method further includes: taking the target area as a center, moving the target area in specified directions to obtain one or more offset areas; performing feature analysis on the offset areas to identify other description attributes of the character's apparent feature; and determining, according to a preset algorithm, a target description attribute from among the description attribute and the other description attributes, the target description attribute being the one closest to the target data.
  • Optionally, the method further includes: extending the offset area or the target area outward to obtain one or more candidate regions; performing feature analysis on the candidate regions to identify other description attributes of the character's apparent feature; and determining, according to a preset algorithm, a target description attribute from among the description attribute and the other description attributes, the target description attribute being the one closest to the target data.
  • By extending the target area or the offset area outward to adjust its position, a candidate area that includes the part of the character can be obtained even when, because the target image is unclear, the original target area does not include the part of the character or includes only a portion of it. This reduces the risk of description attribute recognition errors caused by a target area or offset area that captures the character's part incompletely, and improves the accuracy of description attribute identification.
  • Optionally, moving the target area in specified directions, centered on the target area, to obtain one or more offset areas includes: dividing the target area into a plurality of block maps, where the block maps have the same shape and are contiguous; and, centered on the target area and taking one block map as an offset unit, offsetting the target area in one or more directions by one or more offset units to obtain the one or more offset areas, where each offset area has the same size as the target area.
  • A third aspect provides a method for identifying a description attribute of an apparent feature, performed by the image processing apparatus, comprising: acquiring a target image that includes a character; and performing feature analysis on the target image to identify a description attribute of an apparent feature of the character in the target image. The apparent feature is used to indicate a type to which a characteristic of the character's appearance belongs, the description attribute is used to identify a characteristic of the character's appearance, and the apparent feature has a global attribute indicating that the target image is processed globally.
  • In this way, the target image is directly selected as the recognition region for feature analysis without segmented feature analysis, which simplifies image processing, saves recognition time for the description attribute, and reduces the workload of computer image processing.
  • Optionally, the method further includes: receiving a location attribute of the apparent feature, the location attribute being used to indicate that the apparent feature has a global attribute.
  • Alternatively, the method further includes: acquiring a location attribute of the character's apparent feature from a pre-stored correspondence between the apparent feature and the location attribute, where the location attribute is used to indicate that the apparent feature has a global attribute.
  • Optionally, the method further includes: acquiring other apparent features associated with the apparent feature, the other apparent features being used to indicate the types to which other characteristics of the character's appearance, associated with the characteristic of the apparent feature, belong; acquiring description attributes of the other apparent features; and correcting the description attribute of the apparent feature by means of the description attributes of the other apparent features.
  • In this way, the description attribute of an apparent feature having a global attribute is corrected using the description attributes of associated apparent features having local attributes, which improves the accuracy of the description attribute of the apparent feature having the global attribute.
  • Optionally, acquiring the other apparent features associated with the apparent feature includes: querying a pre-stored correspondence between the apparent feature and other apparent features, and obtaining the other apparent features associated with the apparent feature.
  • Alternatively, acquiring the other apparent features associated with the apparent feature includes: receiving information including identifiers of the other apparent features, and obtaining from it the other apparent features associated with the apparent feature.
  • Acquiring the description attributes of the other apparent features includes: acquiring location features of the other apparent features, where the other apparent features are used to indicate the types to which other characteristics of the character's appearance belong, each location feature indicates the position, in a preset character model, of the part of the character embodied by the corresponding apparent feature, and the other apparent features have local attributes indicating that the image processing apparatus processes the target image locally; identifying a target area according to the location features of the other apparent features, the target area including a part of the character; and performing feature analysis on the target area to identify the description attributes of the other apparent features of the character.
  • Optionally, a location attribute of the other apparent features is received, where the location attribute is used to indicate that the other apparent features have local attributes.
  • In an eighth implementation, a location attribute of the character's other apparent features is acquired from a pre-stored correspondence between apparent features and location attributes, where the location attribute is used to indicate that the other apparent features have local attributes.
  • Optionally, the method further includes: taking the target area as a center, moving the target area in specified directions to obtain one or more offset areas; performing feature analysis on the offset areas to identify other description attributes of the character's apparent features; and determining, according to a preset algorithm, a target description attribute from among the description attributes of the other apparent features and the other description attributes, the target description attribute being the one closest to the target data, as the description attribute of the other apparent features.
  • Optionally, the method further includes: extending the offset area or the target area outward, centered on the offset area or the target area, to obtain one or more candidate regions; performing feature analysis on the candidate regions to identify other description attributes of the character's other apparent features; and determining, according to a preset algorithm, a target description attribute from among the description attributes of the other apparent features and the other description attributes, the target description attribute being the one closest to the target data, as the description attribute of the other apparent features.
  • By extending the target area or the offset area outward to adjust its position, a candidate area that includes the part of the character can be obtained even when, because the target image is unclear, the original target area does not include the part of the character or includes only a portion of it. This reduces the risk of large recognition errors in the description attributes of the other apparent features caused by an incompletely captured part, and increases the accuracy of their description attribute identification.
  • Optionally, moving the target area in specified directions, centered on the target area, to obtain one or more offset areas includes: dividing the target area into a plurality of block maps having the same shape and being contiguous; and, centered on the target area and taking one block map as an offset unit, offsetting the target area in one or more directions by one or more offset units to obtain the one or more offset areas, where each offset area has the same size as the target area.
  • Optionally, acquiring the location features of the other apparent features includes: receiving information including the location features of the other apparent features, the information being used to indicate those location features.
  • Alternatively, acquiring the location features of the other apparent features includes: querying a pre-stored correspondence between the other apparent features and their location features, and obtaining the location features of the other apparent features according to the other apparent features and the correspondence.
  • A fourth aspect provides an apparatus for identifying a description attribute of an apparent feature, comprising modules for performing the method of the first aspect or any possible implementation of the first aspect; the modules may be implemented in hardware or in software.
  • A fifth aspect provides an apparatus for identifying a description attribute of an apparent feature, comprising a processor and a memory connected to the processor, the memory storing computer instructions; the processor executes the computer instructions in the memory to perform the method of the first aspect or any possible implementation of the first aspect.
  • A sixth aspect provides an apparatus for identifying a description attribute of an apparent feature, comprising modules for performing the method of the second aspect or any possible implementation of the second aspect; the modules may be implemented in hardware or in software.
  • A seventh aspect provides an apparatus for identifying a description attribute of an apparent feature, comprising a processor and a memory connected to the processor, the memory storing computer instructions; the processor executes the computer instructions in the memory to perform the method of the second aspect or any possible implementation of the second aspect.
  • An eighth aspect provides an apparatus for identifying a description attribute of an apparent feature, comprising modules for performing the method of the third aspect or any possible implementation of the third aspect; the modules may be implemented in hardware or in software.
  • A ninth aspect provides an apparatus for identifying a description attribute of an apparent feature, comprising a processor and a memory connected to the processor, the memory storing computer instructions; the processor executes the computer instructions in the memory to perform the method of the third aspect or any possible implementation of the third aspect.
  • The apparent feature is used to indicate a type to which a characteristic of the character's appearance belongs and has a local attribute instructing the image processing apparatus to process the target image locally. A position feature of the apparent feature of the target image is acquired to determine the position, in a preset character model, of the part of the character embodied by the apparent feature; a target area including that part of the character is identified according to the position feature; and feature analysis is then performed on the target area to identify the description attribute of the character's apparent feature.
  • In this way, the target area containing the part of the character embodied by the apparent feature is selected from the target image in a targeted manner as the recognition area for feature analysis. This reduces meaningless recognition areas, simplifies image processing, saves recognition time for the description attribute, and reduces the workload of computer image processing.
  • FIG. 1 is a composition diagram of an image processing system according to an embodiment of the present invention.
  • FIG. 2 is a structural diagram of an image processing apparatus 120 according to an embodiment of the present invention.
  • FIG. 3a is a flowchart of a method for identifying a description attribute of an apparent feature according to an embodiment of the present invention
  • FIG. 3b is a schematic diagram of determining a target area according to an embodiment of the present invention.
  • FIG. 3c is a schematic diagram of a target area after being moved according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of block maps according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of another method for identifying a description attribute of an apparent feature according to an embodiment of the present invention;
  • FIG. 6 is a structural diagram of a description attribute identifying apparatus of an apparent feature according to an embodiment of the present disclosure
  • FIG. 7 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present disclosure.
  • FIG. 8 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present disclosure.
  • FIG. 9 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present invention.
  • FIG. 10 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present disclosure.
  • FIG. 11 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present invention.
  • The image processing apparatus and method provided in this embodiment are mainly applied to the field of pedestrian monitoring and are used to identify description attributes of the apparent features of pedestrians in surveillance video, where an apparent feature indicates the type to which a characteristic of a monitored person's appearance belongs. For example, apparent features may be the color of the character's hair, the length of the hair, the pedestrian's skin color, height, gender, the type of clothing worn, or the type of bag carried. In applications in the field of vehicle monitoring, the apparent features may be the color of the vehicle, the license plate, the appearance of the driver, the style of dress, or gender.
  • A description attribute is used to identify a characteristic of the character's appearance; for example, if the apparent feature is the pedestrian's skin color, the skin color includes several description attributes: yellow skin, black skin, or white skin.
  • FIG. 1 is a schematic diagram of an image processing system according to an embodiment of the present invention.
  • The system includes an initial image capturing device 110 and an image processing device 120 that are communicatively coupled. The initial image capturing device 110 monitors and acquires initial images at different shooting angles, the initial images including characters.
  • the initial image capturing device 110 transmits the initial image to the image processing device 120.
  • the image processing device 120 recognizes the outline of the person in the initial image to obtain a contour image surrounded by the outline.
  • the contour image includes the character.
  • the target image including the person in the contour image is acquired.
  • One way of obtaining the target image of the character in the contour image is that the image processing device 120 rotates the contour image in a plane according to the angle of the character in a preset character model; after rotation, the character in the contour image has the same angle as the character in the preset character model and belongs to the same character type.
  • the preset character model is a preset human body model, which is an upright human body model.
  • the preset character model is a preset vehicle model.
  • Alternatively, the image processing device 120 receives a target image input by a user, the target image being obtained by the user rotating the contour image on a terminal device.
  • the terminal device may be the image processing device 120 provided in this embodiment.
  • the image processing device can be used as a terminal device for receiving an input instruction of the user, and rotating the contour image to obtain a target image.
  • After obtaining the target image, the image processing device 120 identifies whether the location attribute of the apparent feature is a global attribute or a local attribute; the local attribute is used to instruct the image processing device 120 to process the target image locally.
  • the global attribute is used to indicate that the image processing device 120 processes the target image as a global process.
  • If the location attribute is a local attribute, the image processing device 120 identifies a target area that matches the location feature of the apparent feature, performs feature analysis on the target area, and identifies the description attribute of the person's apparent feature. The target area includes a part of the person.
• If the position attribute is a global attribute, the image processing apparatus 120 performs feature analysis on the entire target image to identify the description attribute of the apparent feature of the person in the target image.
• The image capturing apparatus 110 is a desktop computer, a server, a mobile computer, a mobile photographing apparatus, a handheld terminal device, or a wearable photographing apparatus having a function of acquiring images, such as taking photographs or videos.
  • the image processing device 120 provided in this embodiment is a desktop computer, a server, a mobile computer, or a handheld terminal device.
  • FIG. 2 is a structural diagram of an image processing apparatus 120 according to an embodiment of the present invention.
  • the image processing device 120 provided in this embodiment is a computer, and includes a processor 11, a memory 12, an interface 14, and a communication bus 13.
  • the processor 11 and the memory 12 communicate via a communication bus 13.
• The interface 14 is configured to communicate with the terminal device; through the interface 14, the image processing device 120 can receive data 122 and information sent by the terminal device.
  • the data 122 includes a target image.
  • the memory 12 is for storing a program 121 for identifying a description attribute of an apparent feature of a person, and the program 121 includes a program for an image processing function.
  • the image processing function refers to a function of expressing a recognition result of a description attribute in an output probability form through an image processing model, and the image processing model may be a mathematical model for processing an image such as a convolutional neural network.
  • the memory 12 is also used to store data 122 and to store information transmitted over the interface 14.
  • Memory 12 includes volatile memory, non-volatile memory, or a combination thereof.
  • the volatile memory is, for example, a random access memory (RAM).
• Non-volatile memory includes, for example, a floppy disk, a hard disk, a solid state drive (SSD), an optical disc, and various other machine-readable media that can store program code.
  • the processor 11 is configured to execute the program 121 to identify a description attribute of the apparent feature.
• The processor 11 is configured to identify whether the location attribute of the apparent feature is a global attribute or a local attribute. If the location attribute is a local attribute, the processor 11 identifies a target area that matches the location feature of the apparent feature and performs feature analysis on the target area to identify the description attribute of the apparent feature of the person in the target image.
  • the position feature is used to indicate a position of the part of the character embodied by the apparent feature in a preset character model.
• The target area includes the part of the character embodied by the apparent feature. If the location attribute is a global attribute, the processor 11 performs feature analysis on the target image received through the interface 14 to identify the description attribute of the apparent feature of the person in the target image.
  • the processor 11 is one of the main devices of the image processing device 120, and functions primarily to interpret computer instructions and to process data in computer software.
• The processor 11 may be a central processing unit (CPU), a complex programmable logic device (CPLD), or a field-programmable gate array (FPGA).
• FIG. 3a is a flowchart of a method for identifying a description attribute of an apparent feature according to an embodiment of the present invention. As shown in FIG. 3a, this embodiment is described mainly in the field of pedestrian monitoring. When the apparent feature of the character has a local attribute, the method for identifying the description attribute of the apparent feature of the character includes the following steps.
• S320. Acquire a location feature of an apparent feature of the character in the target image.
  • the apparent feature has a local attribute for indicating that the image processing device processes the target image as a local process.
  • the position feature of the apparent feature is used to indicate the position of the part of the character embodied in the apparent feature in the preset character model.
• One method for acquiring the location feature of the apparent feature is: receiving information that includes the location feature of the apparent feature, where the information is used to indicate the location feature of the apparent feature, to obtain the location feature of the apparent feature.
• Another way of obtaining the location feature of the apparent feature is to query the correspondence between apparent features and location features pre-stored in the image processing device 120, and to obtain the location feature of the apparent feature according to the apparent feature and the correspondence.
  • the preset character model in this embodiment is a preset human body model, and the position feature of the apparent feature is used to indicate the position of the body part embodied in the apparent feature in the preset human body model.
  • the position of the body part in the preset human body model is determined in two ways.
• One way is to obtain the projection relationship between the contour of the part of the human body and the contour of the preset human body model, and determine from it the position ratio of the contour of the body part to the contour of the preset human body model; this position ratio determines the position of the body part in the preset human body model, that is, the position feature of the apparent feature.
• Another way is to determine the position ratio of a geometric region containing the part of the human body within a geometric region containing the preset human body model, so as to determine the position of the part of the human body in the preset human body model, that is, the position feature of the apparent feature.
• The geometric region containing the part of the human body is a first symmetric geometric region enclosed by the height line between the highest and lowest points of the body part in the preset human body model and the width line between the leftmost and rightmost points of the body part in the preset human body model.
• The geometric region containing the preset human body model is a second symmetric geometric region enclosed by the height line between the highest and lowest points of the preset human body model and the width line between the leftmost and rightmost points of the preset human body model.
• The position ratio may be determined according to the ratio of the height of the first symmetric geometric region to the height of the second symmetric geometric region, or the ratio of the width of the first symmetric geometric region to the width of the second symmetric geometric region.
  • the position ratio is determined jointly by a ratio of a height of the first symmetrical geometric region to a height of the second symmetrical geometric region, and a ratio of a width of the first symmetrical geometric region to a width of the second symmetrical geometric region.
  • the position ratio is determined according to a projection relationship of the first symmetric geometric region in the second symmetric geometric region.
  • the target area includes a part of the person.
• The image processing device 120 scales the contour including the projection relationship so that the difference between the scaled contour and the contour of the person in the target image is within a certain error range; the image processing device 120 then identifies the target area including the part of the human body according to the projection relationship.
  • the following is an example of how to determine the position ratio and how to identify the target area based on the ratio of the height of the first symmetrical geometric region to the height of the second symmetrical geometric region.
• For example, when the apparent feature is hair, the description attributes of the apparent feature include short hair and long hair. The target image is divided into upper and lower regions according to the position ratio of the first symmetric geometric region in the upper 1/2 region of the second symmetric geometric region containing the preset human body model, and the upper region of the target image is selected as the target region.
  • FIG. 3b is a schematic diagram of determining a target area according to an embodiment of the present invention. As shown in the right image of FIG. 3b, the target image is divided into upper and lower partial regions, and the upper region of the target image is selected as the target region.
• The position ratio of the first symmetric geometric region in the second symmetric geometric region can be expressed by a combination of an upper-left coordinate and a lower-right coordinate; for example, the two coordinates [0, 0] and [w, h/2] express the position ratio of the upper 1/2 area. The image processing apparatus 120 then determines the position of the target area as the upper 1/2 area of the target image enclosed by the upper-left corner coordinate and the lower-right corner coordinate. If the position ratio is determined according to the height ratio when the position feature is acquired, the width of the target area defaults to the width w of the target image; h represents the vertical height of the target image.
• The position ratio can be expressed not only by the combination of the upper-left and lower-right corner coordinates but also in other forms: for example, from an obtained formal parameter such as "upper 1/2" or "head and neck", a position algorithm computes, or a lookup in a pre-stored correspondence between formal parameters and coordinate combinations yields, the combination of upper-left and lower-right corner coordinates that determines the position of the target region.
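The mapping from a position ratio to a concrete target area can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name and the use of fractional coordinates are assumptions of this sketch.

```python
def target_region(ratio_tl, ratio_br, w, h):
    """Convert a position ratio, given as fractional upper-left and
    lower-right coordinates, into pixel coordinates in a w x h target image."""
    x0, y0 = ratio_tl
    x1, y1 = ratio_br
    return (int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h))

# Upper 1/2 area for hair: the coordinates [0, 0] and [w, h/2]
# are expressed here as fractions [0, 0] and [1, 0.5].
hair_region = target_region((0, 0), (1, 0.5), w=64, h=128)

# Middle 1/3 area for a package: [0, h/3] and [w, 2h/3].
bag_region = target_region((0, 1 / 3), (1, 2 / 3), w=64, h=128)
```

With a 64 x 128 target image, `hair_region` is the upper half box (0, 0, 64, 64), and `bag_region` spans the middle third of the image height.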
• Similarly, when the apparent feature is a package, the description attributes of the apparent feature include a single-shoulder bag, a cross-body bag, and a backpack. The position feature is obtained by confirming the position ratio of the first symmetric geometric region in the middle 1/3 region of the second symmetric geometric region containing the preset human body model; according to this position ratio, the target image is divided into upper, middle, and lower regions, and the middle region of the target image is selected as the target region.
• The position ratio of the middle 1/3 area can be determined by the combination of the upper-left and lower-right corner coordinates, that is, by the coordinates [0, h/3] and [w, 2h/3].
  • the target image is divided into upper, middle and lower partial regions, and the central region of the target image is selected as the target region.
• Likewise, when the apparent feature is the lower-body dress, the description attributes of the apparent feature include trousers, shorts, short skirts, long skirts, culottes, and the like. The target image is divided into upper, middle, and lower regions, and the lower region of the target image is selected as the target region.
• The position ratio of the first symmetric geometric region in the second symmetric geometric region may also be determined by a combination of more coordinates. For example, when the apparent feature is the lower-body dress, the upper-left, lower-left, upper-right, and lower-right coordinates may jointly determine the position ratio, and thus the position of the target area.
• The coordinate combination includes the upper-left coordinate [w/4, h/3], the lower-left coordinate [w/3, 2h/3], the upper-right coordinate [3w/4, h/3], and the lower-right coordinate [2w/3, 2h/3]; the target area determined by this coordinate combination is an inverted trapezoidal area in the middle of the target image.
  • the positional ratio of the first symmetric geometric region in the second symmetric geometric region may be determined by a combination of at least two coordinates of the upper left corner coordinate, the lower left corner coordinate, the upper right corner coordinate, and the lower right corner coordinate.
  • S340 Perform feature analysis on the target area to identify a description attribute of an apparent feature of the character in the target image.
  • the image processing technology is used to perform feature analysis on the target area, and the recognition result of the description attribute represented by the probabilistic form is obtained.
• For example, the description attribute of the apparent feature hair length has at least two values: a long-hair description attribute and a short-hair description attribute. The recognition result of the description attribute expressed in probabilistic form is a combination of two probabilities, namely (short-hair description attribute 0.9, long-hair description attribute 0.1); the description attribute whose probability conforms to the comprehensive decision rule is then identified as the description attribute of the apparent feature of the person in the target image.
• The comprehensive decision rule may be to select the description attribute whose probability is closest to the target data after comparison or query. For example, if the value of the target data is 1 and the description attribute of the apparent feature of the character is short hair, the short-hair description attribute with probability 0.9 is selected as the description attribute of the apparent feature of the person in the target image.
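The comprehensive decision rule above, selecting the description attribute whose probability is closest to the target data, can be sketched as follows; the function name is an assumption of this illustration.

```python
def decide_attribute(probabilities, target_data=1.0):
    """Comprehensive decision rule sketch: pick the description attribute
    whose probability is closest to the target data. With probabilities in
    [0, 1] and target data 1, this picks the highest-probability attribute."""
    return min(probabilities, key=lambda attr: abs(probabilities[attr] - target_data))

# Recognition result expressed in probabilistic form, as in the text.
result = decide_attribute({"short hair": 0.9, "long hair": 0.1})
```

Here `result` is the short-hair description attribute, matching the worked example in the text.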
  • the image processing device 120 acquires a location attribute that the apparent feature of the character has, the location attribute being used to indicate that the apparent feature is a local attribute.
• One way of obtaining the location attribute of the apparent feature is to receive information sent by the terminal device that includes the location attribute of the apparent feature. For example, when the apparent feature is hair, the description attribute of the hair includes long hair and short hair; on receiving information sent by the user indicating that the position attribute of the hair is a local attribute, the position attribute of the hair is confirmed as a local attribute.
• Similarly, when the apparent feature is the parcel type, the description attribute of the parcel type includes a backpack or a clutch; on receiving information sent by the user indicating that the location attribute of the parcel type is a local attribute, the position attribute of the parcel type is confirmed as a local attribute.
• When the apparent feature is the lower-body dress, the description attribute of the lower-body dress includes pants or a skirt; on receiving information indicating that the position attribute of the lower-body dress is a local attribute, the position attribute of the lower-body dress is confirmed as a local attribute.
• Alternatively, the image processing device 120 acquires the location attribute of the apparent feature of the character from a pre-stored correspondence between apparent features and location attributes, the location attribute being used to indicate that the apparent feature is a local attribute.
• The image processing device 120 also obtains the apparent feature of the character and the description attribute of the apparent feature.
• One way is that the image processing device 120 receives, through the interface 14, the apparent feature and the description attribute of the apparent feature entered by the user as text.
• Another way is that the image processing device 120 receives the apparent feature and the description attribute of the apparent feature entered by the user on a visual interface.
• Yet another method is: receiving, from the terminal device, information that includes the apparent feature and the description attribute of the apparent feature, to obtain the apparent feature of the character and the description attribute of the apparent feature.
• FIG. 3c is a schematic diagram of a target area after being offset according to an embodiment of the present invention. As shown in FIG. 3c, the target area is moved to obtain one or more offset regions.
• The image processing device 120 performs image feature analysis on the offset region to obtain other description attributes of the apparent feature, and determines the target description attribute from the description attribute and the other description attributes according to a preset algorithm; the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to the target data.
• The value of the target data may be 1, and the preset algorithm identifies the target description attribute represented by the highest probability as the description attribute of the apparent feature of the person in the target image. For example, it may be the description attribute with the highest probability after comparison or query. As another example, the probabilities of the same description attribute matched in the target area and in each offset area may be summed; the mean of the summed probabilities is then taken, or support vector machine processing is applied, and the target description attribute represented by the highest resulting probability is selected as the description attribute of the apparent feature of the person in the target image.
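The two fusion variants above, a max operation over all analyzed regions and a summed-then-averaged probability per attribute, can be sketched as follows; the function names and data are illustrative assumptions, not the claimed implementation.

```python
def fuse_max(region_probs):
    """Max operation: over all analyzed regions, pick the attribute whose
    single highest probability is largest."""
    best_attr, best_p = None, -1.0
    for probs in region_probs:
        for attr, p in probs.items():
            if p > best_p:
                best_attr, best_p = attr, p
    return best_attr, best_p

def fuse_mean(region_probs):
    """Sum the probabilities of the same description attribute across
    regions, take the mean, and pick the attribute with the highest mean."""
    totals = {}
    for probs in region_probs:
        for attr, p in probs.items():
            totals[attr] = totals.get(attr, 0.0) + p
    means = {attr: s / len(region_probs) for attr, s in totals.items()}
    best = max(means, key=means.get)
    return best, means[best]

regions = [
    {"short hair": 0.7, "long hair": 0.3},    # target region
    {"short hair": 0.95, "long hair": 0.05},  # one offset region
]
```

On this toy input, both variants select the short-hair description attribute; the max operation reports 0.95 and the mean variant reports 0.825.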
  • the target data may be a standard parameter of a standard image, and may be a standard parameter obtained by performing different types of feature analysis on the standard image.
• A specific implementation of offsetting the target area to obtain one or more offset areas is: dividing the target area into several block maps, where the block maps have the same shape and are contiguous.
  • the shape of the block map may be a horizontal strip shape, a vertical strip shape, or a vertical grid or a horizontal grid.
• FIG. 3d is a schematic diagram of block maps according to an embodiment of the present invention. As shown in FIG. 3d, the area enclosed by the thick line in each image is the target area. The block map provided by the embodiment of the present invention may be the vertical strips shown on the left of FIG. 3d, the horizontal strips shown in the middle of FIG. 3d, or the vertical or horizontal grid shown on the right of FIG. 3d.
  • the embodiment of the present invention does not limit the shape of the block diagram.
• With the target area as the offset center and one block map as an offset unit, the target area is shifted by one or more offset units in one or more directions to obtain one or more offset regions, where each offset region has the same size as the target region.
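The block-unit offsetting described above can be sketched as follows, with regions represented as (x0, y0, x1, y1) boxes; the function name, the box convention, and the example geometry are assumptions of this sketch.

```python
def offset_regions(target, unit_w, unit_h, shifts):
    """Shift the target region by multiples of one block map (unit_w x
    unit_h) in the given directions; every offset region keeps the same
    size as the target region, as described in the text."""
    x0, y0, x1, y1 = target
    return [(x0 + dx * unit_w, y0 + dy * unit_h,
             x1 + dx * unit_w, y1 + dy * unit_h)
            for dx, dy in shifts]

# A 32 x 32 target region inside a 64 x 128 image, with 16-pixel block
# maps, shifted one unit right, one unit left, and one unit down.
regions = offset_regions((16, 32, 48, 64), unit_w=16, unit_h=16,
                         shifts=[(1, 0), (-1, 0), (0, 1)])
```

Each resulting box is the same 32 x 32 size as the original target region, only displaced by whole block units.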
• The offset region or the target region is then extended outward according to one or more different preset sizes to obtain a candidate region containing the offset region or the target region.
  • the size of the candidate area is larger than the size of the offset area.
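The outward extension into a candidate region can be sketched as follows; the function name, the uniform per-side padding, and the clamping to the image bounds are assumptions of this illustration.

```python
def candidate_region(region, pad, img_w, img_h):
    """Extend an offset (or target) region outward by a preset size `pad`
    on every side, clamped to the image bounds, yielding a candidate
    region at least as large as the region it contains."""
    x0, y0, x1, y1 = region
    return (max(0, x0 - pad), max(0, y0 - pad),
            min(img_w, x1 + pad), min(img_h, y1 + pad))

# Extending the upper-half target region of a 64 x 128 image by 8 pixels;
# the left, top, and right sides are clamped at the image border.
cand = candidate_region((0, 0, 64, 64), pad=8, img_w=64, img_h=128)
```

The resulting candidate region (0, 0, 64, 72) contains the original region and is larger wherever the image border allows.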
• Image processing technology is used to perform feature analysis on the candidate region to obtain other description attributes of the apparent feature, and the target description attribute is determined from the description attribute and the other description attributes according to a preset algorithm; the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to the target data.
• The other description attributes include the description attribute of the apparent feature matching the candidate area, obtained after performing feature analysis on the candidate area, or the description attribute of the apparent feature matching the offset area, obtained by performing feature analysis on the offset area.
• The value of the target data may be 1, and the preset algorithm identifies the target description attribute represented by the highest probability as the description attribute of the apparent feature of the person in the target image; for example, it may be the description attribute with the highest probability after comparison or query.
• For example, suppose the description attribute of the apparent feature hair includes short hair and long hair, and the value 1 of the target data is the standard for the short-hair description attribute. Feature analysis is performed on the target region, one offset region, and two candidate regions respectively, and the probabilities of the description attributes corresponding to the target region, the offset region, and each candidate region are obtained as follows: [short-hair description attribute 0.7, long-hair description attribute 0.3], [short-hair description attribute 0.95, long-hair description attribute 0.05], [short-hair description attribute 0.6, long-hair description attribute 0.4], [short-hair description attribute 0.45, long-hair description attribute 0.55]. The max operation then selects the first description attribute with probability 0.95, i.e., the identified description attribute of the hair is the short-hair description attribute.
  • the target data may be a standard parameter of a standard image, and may be a standard parameter obtained by performing different types of feature analysis on the standard image.
• In addition, contour recognition may be performed separately on the part of the human body in the target area, in each offset area, and in each candidate area. When a target area, offset area, or candidate area is recognized whose contour shape is the same as that of the corresponding part of the preset human body model, that area is retained; areas whose contour shape does not match the corresponding part of the preset human body model are not retained.
  • the image processing apparatus 120 determines the target description attribute from the description attribute and the other description attributes according to a preset algorithm.
  • the target description attribute is a description attribute of the description attribute and the other description attribute that is closest to the target data.
• For the preset algorithm, reference may be made to the description based on FIG. 3a of obtaining the other description attributes of the apparent feature after performing feature analysis on the offset area and the candidate area, and of applying the preset algorithm to the description attribute and the other description attributes to obtain the target description attribute; details are not repeated here.
• A preset number of offset regions may be selected, randomly or in a predetermined order, for feature analysis or contour recognition of the human body part; likewise, after a plurality of candidate regions are obtained, a preset number of candidate areas may be selected, randomly or in a predetermined order, for feature analysis or contour recognition of the human body part. The preset number of selected candidate areas and the preset number of offset areas may be the same or different.
• Feature analysis may be performed on the target area alone according to the user's needs, or feature analysis or contour recognition may be performed on the target area and a preset number of offset areas according to the user's selection. When the width and height of the offset region are the same as the height and width of the target image, the obtained candidate region is the target image itself.
  • FIG. 4 is a flowchart of another method for identifying attribute description of an apparent feature according to an embodiment of the present invention.
• The description attribute identification method of the apparent feature of the character is shown in FIG. 4 and specifically includes the following steps.
• S410. Acquire a target image, where the target image includes a character.
  • S420. Perform feature analysis on the target image to identify a description attribute of an apparent feature of the character in the target image.
  • the apparent feature has a global attribute, which is used to indicate that the image processing device processes the target image as a global process.
  • Image processing technology is used to analyze the target image, and the recognition result of the description attribute expressed in the probabilistic form is obtained.
• The description attribute whose probability conforms to the comprehensive decision rule is identified as the description attribute of the apparent feature of the character in the target image; for example, it may be the description attribute with the highest probability after comparison or query.
• Based on the above description, before step S420 the method for identifying the description attribute of the apparent feature further includes: the image processing device 120 acquires the location attribute of the apparent feature of the character, the location attribute being used to indicate that the apparent feature is a global attribute.
  • the method for obtaining the location attribute of the apparent feature is to receive information that is sent by the terminal device and includes a location attribute that the apparent feature has, to obtain a location attribute that the apparent feature has.
• Alternatively, before step S420, the image processing apparatus 120 acquires the location attribute of the apparent feature of the character from a pre-stored correspondence between apparent features and location attributes, where the location attribute is used to indicate that the apparent feature is a global attribute.
• The image processing device 120 also acquires the apparent feature of the character and the description attribute of the apparent feature.
• One way is that the image processing device 120 receives, through the interface 14, the apparent feature and the description attribute of the apparent feature entered by the user as text.
• Another way of obtaining the apparent feature of the character and the description attribute of the apparent feature is: the image processing device 120 receives the apparent feature and the description attribute of the apparent feature entered by the user on a visual interface.
• Yet another way of obtaining the apparent feature of the character and the description attribute of the apparent feature is: receiving, from the terminal device, information that includes the apparent feature and the description attribute of the apparent feature.
• Based on the above description, after step S420 the method for identifying the description attribute of the apparent feature further includes: acquiring another apparent feature associated with the apparent feature, where the other apparent feature is used to indicate a type of appearance characteristic of the character associated with the apparent feature; acquiring a description attribute of the other apparent feature; and correcting the description attribute of the apparent feature with the description attribute of the other apparent feature.
  • An implementation manner of obtaining other apparent features associated with the apparent feature in the embodiment is: querying a correspondence between the apparent feature and other apparent features pre-stored in the image processing device Obtaining other apparent features associated with the apparent features.
  • Another implementation for obtaining other apparent features associated with the apparent features is to receive information including the identification of the other apparent features to obtain other apparent features associated with the apparent features.
• The description attribute of the apparent feature is corrected by the description attribute of the other apparent feature associated with the apparent feature. A specific implementation is a correlation weighting algorithm: the degree of correlation of the association between the apparent feature and the other apparent feature is used as the weight in the correlation weighting algorithm, and the description attribute obtained by weighting the description attribute of the apparent feature is used as the description attribute of the apparent feature of the character in the target image. In this embodiment, the other apparent features associated with apparent features having global attributes have local attributes.
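The correlation weighting above can be sketched as follows. This is an illustrative assumption, not the claimed algorithm: the function name, the probability numbers, and the convention of mapping the associated feature's evidence onto the same attribute values are all inventions of this sketch.

```python
def weighted_correction(own_probs, other_probs, correlation):
    """Correct a feature's description-attribute probabilities with those
    of an associated apparent feature; the correlation degree of the
    association serves as the weight, and the weighted probabilities
    decide the final description attribute."""
    corrected = {}
    for attr, p in own_probs.items():
        corrected[attr] = (1 - correlation) * p + correlation * other_probs.get(attr, 0.0)
    best = max(corrected, key=corrected.get)
    return best, corrected

# Hypothetical numbers: a weak global result corrected by strongly
# correlated local evidence (correlation degree 0.3).
best, corrected = weighted_correction(
    {"long hair": 0.55, "short hair": 0.45},
    {"long hair": 0.9, "short hair": 0.1},
    correlation=0.3)
```

With these numbers, the weighted probability of the long-hair attribute rises to 0.655 and it remains the selected description attribute, now with a wider margin.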
• Based on the above description, the method for identifying the description attribute of the apparent feature further includes: inputting the target image, the target area, the offset area, or the candidate area into a mathematical model, and using the description attribute output by the mathematical model that is closest to the target data as the description attribute of the apparent feature of the person in the target image.
  • the mathematical model is a mathematical model obtained by training and correcting the calculation model using a training data set.
• The training data set includes other images and description attributes of other apparent features of the characters in those images; the other images include the target area, the offset area, or the candidate area in the target image, or images of characters other than the one in the target image.
  • Other apparent features of the character in the other images are associated with apparent features of the target image having global attributes.
  • other apparent features of the person in the other images have global attributes or local attributes.
• The characters in the other images and the person in the target image belong to the same character type.
  • The training data set is used to train the calculation model: the difference between the description attributes, recognized by the calculation model, of the other apparent features of the characters in the other images and the target data is obtained; the calculation parameters of the calculation model are then adjusted according to that difference, and the adjusted calculation model is used to recognize the description attributes of the apparent features of the characters in the other images again. Training and correction of the calculation model ends when the difference between the description attributes obtained by the calculation model and the target data is less than or equal to the target error.
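The iterative correction described above can be sketched as follows. This is a minimal illustration, not the patent's actual network: a least-squares model with a single weight vector stands in for the calculation model, and all names (`train_until_target_error`, `target_error`) are assumptions for the sketch.

```python
import numpy as np

def train_until_target_error(features, target_data, target_error=0.01,
                             learning_rate=0.1, max_iterations=10000):
    """Recognize description attributes for the training samples, measure the
    gap to the target data, adjust the calculation parameters, and repeat
    until the gap is at most the target error."""
    n_samples, n_features = features.shape
    weights = np.zeros(n_features)          # the "calculation parameters"
    error = float("inf")
    for _ in range(max_iterations):
        predicted = features @ weights       # attributes produced by the model
        gap = predicted - target_data        # difference from the target data
        error = float(np.mean(gap ** 2))
        if error <= target_error:            # training-end condition from the text
            break
        # adjust the calculation parameters according to the gap
        weights -= learning_rate * (features.T @ gap) / n_samples
    return weights, error
```

The stopping rule mirrors the text: training ends only once the measured difference is less than or equal to the target error, otherwise the parameters are adjusted and recognition is repeated.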
  • FIG. 5 is a flowchart of another method for identifying attribute description of an apparent feature according to an embodiment of the present invention.
  • The method for identifying the description attribute of the apparent feature of the character by using multiple position features is shown in FIG. 5 and specifically includes the following steps.
  • Step S510: Acquire a target image, where the target image includes a character.
  • Step S520: Acquire a first location feature and a second location feature of the apparent feature of the character.
  • A plurality of position features of the apparent feature of the character may also be obtained, each position feature being used to indicate the position, in a preset character model, of a part of the character embodied by the apparent feature; the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs. The first position feature is used to indicate the position, in the preset character model, of the first part of the character embodied by the apparent feature, and the second position feature is used to indicate the position, in the preset character model, of the second part of the character embodied by the apparent feature. The apparent feature has a local attribute, and the local attribute is used to indicate that the image processing device processes the target image in a localized manner.
  • For details about acquiring the location feature of the apparent feature of the character in this step, refer to the implementation of step S320 in the embodiment shown in FIG. 3a; details are not described herein again.
  • Step S530: Acquire a maximum distance between the first part and the second part according to the first position feature and the second position feature, where the maximum distance is less than a preset threshold. The maximum distance includes a maximum vertical height between the two parts (the first part and the second part) and/or a maximum width between the two parts. The preset threshold is a value that ensures that analyzing the area determined by the maximum distance is more efficient than analyzing the entire target image. If there are multiple position features, the maximum distance is greater than or equal to the maximum vertical height between any two parts, or greater than or equal to the maximum width between any two parts.
  • Step S540: Identify a target area based on the maximum distance, where the target area includes the first part and the second part. If the maximum distance is the maximum vertical height between the two parts, the width of the target area defaults to the width of the target image. If the maximum distance is the maximum width between the two parts, the height of the target area defaults to the height of the target image. If the maximum distance includes both the maximum vertical height and the maximum width between the two parts, the height of the target area is the maximum vertical height between the two parts, and the width of the target area is the maximum width between the two parts. If there are multiple position features, the target area includes every part of the character embodied by the apparent features.
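These rules can be sketched as follows, assuming each part's position is given as an (x_min, y_min, x_max, y_max) box in image coordinates and regions are returned as (x, y, w, h); the function and parameter names are illustrative, not from the patent.

```python
def target_region_from_parts(part_a, part_b, image_w, image_h,
                             use_vertical=True, use_horizontal=True):
    """Identify the target area covering two parts from their position boxes.

    part_a, part_b: (x_min, y_min, x_max, y_max) boxes of the two parts.
    use_vertical / use_horizontal: which component of the maximum distance
    is available (vertical height, width, or both)."""
    x_min = min(part_a[0], part_b[0])
    y_min = min(part_a[1], part_b[1])
    x_max = max(part_a[2], part_b[2])
    y_max = max(part_a[3], part_b[3])
    max_height = y_max - y_min   # maximum vertical height between the two parts
    max_width = x_max - x_min    # maximum width between the two parts
    if use_vertical and not use_horizontal:
        # Only the vertical distance is used: width defaults to the image width.
        return (0, y_min, image_w, max_height)
    if use_horizontal and not use_vertical:
        # Only the width is used: height defaults to the image height.
        return (x_min, 0, max_width, image_h)
    # Both components known: target area is the tight box over both parts.
    return (x_min, y_min, max_width, max_height)
```

For example, for a "coat length" feature one box might cover the shoulders and the other the knees; the returned region spans both while staying far smaller than the whole frame.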
  • Step S550: Perform feature analysis on the target area, and identify a description attribute of the apparent feature of the character in the target image.
  • the location attribute is used to indicate that the apparent feature is a local attribute.
  • the method further includes: the image processing device 120 acquires an apparent feature of the character, and a description attribute of the apparent feature.
  • For specific implementation details of obtaining a candidate region according to the offset region, obtaining other description attributes according to the candidate region, and determining the target description attribute according to the description attribute and a plurality of other description attributes, refer to the embodiment shown in FIG. 3a; details are not described herein again.
  • The sequence numbers of the above processes do not imply an order of execution; the order of execution of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation processes of the embodiments of the present invention.
  • A method for identifying the description attribute of an apparent feature provided according to an embodiment of the present invention is described in detail above with reference to FIGS. 3a through 5, and a description attribute identifying apparatus for an apparent feature provided in accordance with an embodiment of the present invention will be described below with reference to FIGS. 6 through 11.
  • FIG. 6 is a structural diagram of a description attribute identifying device for an apparent feature according to an embodiment of the present invention.
  • The description attribute identifying device 610 of the apparent feature provided by this embodiment of the present invention is implemented based on the description attribute identification method of the apparent feature shown in FIG. 3a, and includes a processor 611 and a memory 612.
  • the memory 612 stores computer instructions, and the processor 611 is coupled to the memory 612.
  • the processor 611 is configured to execute computer instructions in the memory 612 to perform the following steps:
  • a target image is obtained, the target image including a character.
  • the processor 611 obtains the details of the target image, and may refer to the description of step S310 shown in FIG. 3a, and details are not described herein again.
  • The apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, and the location feature of the apparent feature is used to indicate the position, in a preset character model, of the part of the character embodied by the apparent feature.
  • the processor 611 acquires the details of the location feature of the apparent feature of the target image. For details, refer to the description of step S320 shown in FIG. 3a, and details are not described herein again.
  • a target area is identified based on the location feature, the target area including a part of the character.
  • the processor 611 identifies the details of the target area according to the location feature. For details, refer to the description of step S330 shown in FIG. 3a, and details are not described herein again.
  • For details about the processor 611 performing feature analysis on the target area to identify the description attribute of the apparent feature of the character, refer to the description of step S340 shown in FIG. 3a; details are not described herein again.
  • the processor 611 is further configured to receive a location attribute that the apparent feature has, where the location attribute is used to indicate that the apparent feature is a local attribute.
  • the processor 611 is further configured to acquire a location attribute of the apparent feature of the character in a correspondence between the apparent feature and the location attribute stored in advance, where the location The attribute is used to indicate that the apparent feature is a local attribute.
  • processor 611 is further configured to perform the following steps:
  • determining a target description attribute from the description attribute and the other description attributes, the target description attribute being the description attribute, among the description attribute and the other description attributes, that is closest to the target data.
  • The processor 611 moves the target area in a specified direction to obtain one or more offset areas, performs feature analysis on the offset areas to identify other description attributes of the apparent feature of the person, and determines the target description attribute from the description attribute and the other description attributes according to a preset algorithm. For details, refer to the description of the related embodiment after step S330 shown in FIG. 3a; details are not described herein again.
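One plausible reading of the "preset algorithm" is a nearest-match selection over the attributes recognized from the target area and each offset area. The sketch below assumes Euclidean distance and illustrative names; the patent does not fix the distance measure.

```python
import math

def choose_target_attribute(candidates, target_data):
    """Pick the description attribute closest to the target data.

    candidates: list of (description_attribute, score_vector) pairs, one per
    analyzed region (the target area and each offset area).
    target_data: reference score vector the attributes are compared against."""
    def distance(vector):
        # Euclidean distance between a candidate's scores and the target data
        return math.sqrt(sum((v - t) ** 2 for v, t in zip(vector, target_data)))
    best_attribute, _ = min(candidates, key=lambda pair: distance(pair[1]))
    return best_attribute
```

For instance, with target data `[1.0, 0.0]`, a candidate scored `[0.9, 0.1]` is selected over one scored `[0.2, 0.8]`, since its vector lies closer to the reference.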
  • The processor 611 is further configured to divide the target area into several blocks, the blocks having the same shape and being consecutive; and, centering on the target area, to offset the target area in one or more directions by one or more offset units, with one block as the offset unit, to obtain the one or more offset areas, where each offset area has the same size as the target area.
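The block-wise offsetting can be sketched as follows, assuming axis-aligned (x, y, w, h) regions and four offset directions; both the names and the direction set are illustrative assumptions, and offsets that would leave the target image are discarded.

```python
def offset_regions(target, block_w, block_h, image_w, image_h, max_units=1):
    """Generate offset areas around a target area.

    target: (x, y, w, h) of the target area; block_w x block_h is the size of
    one block, which serves as the offset unit. The target area is shifted in
    each direction by 1..max_units units; every offset area keeps the size of
    the target area."""
    x, y, w, h = target
    regions = []
    for n in range(1, max_units + 1):
        for dx, dy in ((n, 0), (-n, 0), (0, n), (0, -n)):
            nx, ny = x + dx * block_w, y + dy * block_h
            # keep only offset areas that stay inside the target image
            if nx >= 0 and ny >= 0 and nx + w <= image_w and ny + h <= image_h:
                regions.append((nx, ny, w, h))
    return regions
```

Each returned region is then analyzed like the target area itself, yielding the "other description attributes" from which the target description attribute is chosen.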
  • the processor 611 is further configured to receive information including a location feature of the apparent feature, where the information is used to indicate a location feature of the apparent feature.
  • The processor 611 is further configured to query a pre-stored correspondence between the apparent feature and the location feature, and obtain the location feature of the apparent feature according to the apparent feature and the correspondence.
  • The description attribute identifying apparatus 610 of the apparent feature according to this embodiment of the present invention may correspond to the image processing apparatus 120 in the embodiments of the present invention and to the method shown in FIG. 3a; the foregoing and other operations and/or functions of the respective modules in the description attribute identifying apparatus 610 are respectively intended to implement the corresponding processes of the method shown in FIG. 3a. For brevity, details are not described herein again.
  • FIG. 7 is a structural diagram of another description attribute identifying device of an apparent feature according to an embodiment of the present invention.
  • The description attribute identifying device 710 of the apparent feature provided by this embodiment of the present invention is implemented based on the description attribute identification method of the apparent feature shown in FIG. 5, and includes a processor 711 and a memory 712.
  • the memory 712 stores computer instructions, and the processor 711 is coupled to the memory 712.
  • the processor 711 is configured to execute computer instructions in the memory 712 to perform the following steps:
  • a target image is obtained, the target image including a character.
  • the details of the target image acquired by the processor 711 can be referred to the description of step S310 shown in FIG. 3a, and details are not described herein again.
  • the processor 711 obtains the details of the location feature of the apparent feature of the target image. For details, refer to the description of step S320 shown in FIG. 3a, and details are not described herein again.
  • For details about the processor 711 acquiring the maximum distance between the first part and the second part according to the first position feature and the second position feature, refer to the description of step S530 shown in FIG. 5; details are not described herein again.
  • The processor 711 identifies a target area based on the maximum distance, the target area including the first part and the second part.
  • the processor 711 identifies the details of the target area according to the maximum distance, and may refer to the description of step S540 shown in FIG. 5, and details are not described herein again.
  • For details about the processor 711 performing feature analysis on the target area to identify the description attribute of the apparent feature of the person in the target image, refer to the description of step S550 shown in FIG. 5; details are not described herein again.
  • the maximum distance is less than a preset threshold.
  • the processor 711 is further configured to receive a location attribute that the apparent feature has, where the location attribute is used to indicate that the apparent feature is a local attribute.
  • the processor 711 is further configured to acquire a location attribute of the apparent feature of the character in a correspondence between the apparent feature and the location attribute stored in advance, where the location The attribute is used to indicate that the apparent feature is a local attribute.
  • the processor 711 is further configured to receive information including a first location feature and a second location feature of the apparent feature, where the information is used to indicate the apparent feature First position feature and second position feature.
  • the processor 711 is further configured to query a correspondence between the pre-stored apparent features and the first location feature and the second location feature, respectively, according to the appearance feature and the location The correspondence relationship acquires the first location feature and the second location feature of the apparent feature.
  • processor 711 is further configured to perform the following steps:
  • determining a target description attribute from the description attribute and the other description attributes, the target description attribute being the description attribute, among the description attribute and the other description attributes, that is closest to the target data.
  • The processor 711 moves the target area in a specified direction to obtain one or more offset areas, performs feature analysis on the offset areas to identify other description attributes of the apparent feature of the person, and determines the target description attribute from the description attribute and the other description attributes according to a preset algorithm. For details, refer to the description of the related embodiment after step S330 shown in FIG. 3a; details are not described herein again.
  • The processor 711 is further configured to divide the target area into several blocks, the blocks having the same shape and being consecutive; and, centering on the target area, to offset the target area in one or more directions by one or more offset units, with one block as the offset unit, to obtain the one or more offset areas, where each offset area has the same size as the target area.
  • The description attribute identifying apparatus 710 of the apparent feature may correspond to the image processing apparatus 120 in the embodiments of the present invention and to the method shown in FIG. 5; the foregoing and other operations and/or functions of the respective modules in the description attribute identifying apparatus 710 are respectively intended to implement the corresponding processes of the method shown in FIG. 5. For brevity, details are not described herein again.
  • FIG. 8 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present invention.
  • The description attribute identifying apparatus 810 of the apparent feature provided by this embodiment of the present invention is implemented based on the description attribute identification method of the apparent feature shown in FIG. 3a, and includes an obtaining unit 811 and a processing unit 812, the processing unit 812 being connected to the obtaining unit 811.
  • the function of each module in the description attribute identifying means 810 is described in detail below:
  • the acquiring unit 811 is configured to acquire a target image, where the target image includes a character.
  • the function of the acquisition unit 811 to acquire the target image may be implemented by the interface 14 in the image processing device 120.
  • For specific details of the acquisition unit 811 acquiring the target image, refer to the description of the image processing device 120 acquiring the target image in step S310 shown in FIG. 3a; details are not described herein again.
  • The acquiring unit 811 is further configured to acquire a location feature of an apparent feature of the target image, where the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, and the location feature of the apparent feature is used to indicate the position, in a preset character model, of the part of the character embodied by the apparent feature; the apparent feature has a local attribute, used to indicate that the image processing device processes the target image in a localized manner.
  • the function of the acquisition unit 811 to acquire the location feature of the apparent feature of the target image may be implemented by the interface 14 in the image processing device 120.
  • For specific details of the acquiring unit 811 acquiring the location feature of the apparent feature of the target image, refer to the description of the image processing device 120 acquiring the location feature in step S320 shown in FIG. 3a; details are not described herein again.
  • The processing unit 812 is configured to identify a target area according to the location feature, where the target area includes a part of the character. In this embodiment, the function of the processing unit 812 identifying the target area according to the location feature may be implemented by the processor 11 in the image processing device 120. For specific details, refer to the description of the image processing device 120 identifying the target area according to the location feature in step S330 shown in FIG. 3a; details are not described herein again.
  • the processing unit 812 is further configured to perform feature analysis on the target area to identify a description attribute of the apparent feature of the character.
  • the processing unit 812 performs feature analysis on the target area, and the function of identifying the description attribute of the apparent feature of the character may be implemented by the processor 11 in the image processing device 120.
  • For specific details of the processing unit 812 performing feature analysis on the target area to identify the description attribute of the apparent feature of the character, refer to the description of the image processing device 120 in step S340 shown in FIG. 3a; details are not described herein again.
  • the acquiring unit 811 is further configured to receive a location attribute that the apparent feature has, where the location attribute is used to indicate that the apparent feature is a local attribute.
  • The function of the obtaining unit 811 receiving the location attribute of the apparent feature may be implemented by the interface 14 in the image processing device 120.
  • the acquiring unit 811 is further configured to acquire, in a correspondence relationship between the apparent feature and the location attribute stored in advance, a location attribute that is included in an apparent feature of the character, where the location The attribute is used to indicate that the apparent feature is a local attribute.
  • The function of the acquiring unit 811 acquiring, in the pre-stored correspondence between the apparent feature and the location attribute, the location attribute of the apparent feature of the character may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 812 is further configured to move the target area to a specified direction centering on the target area to obtain one or more offset areas.
  • The function of the processing unit 812 moving the target area in the specified direction, centering on the target area, to obtain one or more offset areas may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 812 is further configured to perform feature analysis on the offset area to identify other description attributes of the apparent features of the character.
  • The function of the processing unit 812 performing feature analysis on the offset area to identify other description attributes of the apparent feature of the character may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 812 is further configured to determine, according to a preset algorithm, a target description attribute from the description attribute and the other description attribute, where the target description attribute is the closest target among the description attribute and the other description attribute The description attribute of the data.
  • the processing unit 812 determines the function of the target description attribute from the description attribute and the other description attributes according to a preset algorithm, which may be implemented by the processor 11 in the image processing apparatus 120.
  • the processing unit 812 is further configured to divide the target area into a plurality of block diagrams.
  • the processing unit 812 divides the target area into functions of a plurality of block maps, which can be implemented by the processor 11 in the image processing device 120.
  • The processing unit 812 is further configured to, centering on the target area, offset the target area in one or more directions by one or more offset units, with one block as the offset unit, to obtain the one or more offset areas, where each offset area has the same size as the target area.
  • The function of the processing unit 812 offsetting the target area, centering on the target area, in one or more directions by one or more offset units, with one block as the offset unit, to obtain the one or more offset areas may be implemented by the processor 11 in the image processing device 120.
  • the acquiring unit 811 is further configured to receive information including a location feature of the apparent feature, where the information is used to indicate a location feature of the apparent feature.
  • the function of the acquisition unit 811 receiving the information including the location feature of the apparent feature may be implemented by the interface 14 in the image processing device 120.
  • the acquiring unit 811 is further configured to query a correspondence between the apparent feature and the location feature stored in advance; and obtain the appearance according to the apparent feature and the corresponding relationship The location feature of the feature.
  • The function of the obtaining unit 811 querying the pre-stored correspondence between the apparent feature and the location feature and acquiring the location feature of the apparent feature according to the apparent feature and the correspondence may be implemented by the processor 11 in the image processing device 120.
  • The description attribute identifying apparatus 810 of the apparent feature in this embodiment of the present invention may be implemented by an Application Specific Integrated Circuit (ASIC) or a Programmable Logic Device (PLD); the PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or any combination thereof.
  • The description attribute identifying apparatus 810 of the apparent feature may correspond to performing the methods described in the embodiments of the present invention, and the foregoing and other operations and/or functions of the respective units in the description attribute identifying apparatus 810 are respectively intended to implement the method in FIG. 3a and the corresponding processes related to the method in FIG. 3a. For brevity, details are not described herein again.
  • FIG. 9 is a structural diagram of another description attribute identifying apparatus of an apparent feature according to an embodiment of the present invention.
  • The description attribute identifying apparatus 910 of the apparent feature provided by this embodiment of the present invention is implemented based on the description attribute identification method of the apparent feature shown in FIG. 5, and includes an obtaining unit 911 and a processing unit 912, the processing unit 912 being connected to the obtaining unit 911.
  • the function of each module in the description attribute identifying means 910 is described in detail below:
  • the acquiring unit 911 is configured to acquire a target image, where the target image includes a character.
  • the function of the acquisition unit 911 to acquire the target image can be implemented by the interface 14 in the image processing device 120.
  • the function of the acquisition unit 911 to acquire the target image may be acquired by the image processing device 120 described in step S310 shown in FIG. 3a, and details are not described herein.
  • The acquiring unit 911 is further configured to acquire a first location feature and a second location feature of the apparent feature of the character, where the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the first location feature is used to indicate the position, in a preset character model, of the first part of the character embodied by the apparent feature, and the second location feature is used to indicate the position, in the preset character model, of the second part of the character embodied by the apparent feature; the apparent feature has a local attribute, and the local attribute is used to indicate that the image processing device processes the target image in a localized manner.
  • the function of acquiring the first location feature and the second location feature of the apparent feature of the character by the obtaining unit 911 can be implemented by the interface 14 in the image processing device 120.
  • For specific details of the acquiring unit 911 acquiring the first location feature and the second location feature of the apparent feature of the character, refer to the description of the image processing device 120 in step S520 shown in FIG. 5; details are not described herein again.
  • The processing unit 912 is configured to acquire a maximum distance between the first part and the second part according to the first location feature and the second location feature. In this embodiment, this function of the processing unit 912 may be implemented by the processor 11 in the image processing device 120.
  • For specific details of the processing unit 912 acquiring the maximum distance between the first part and the second part according to the first location feature and the second location feature, refer to the description of the image processing device 120 in step S530 shown in FIG. 5; details are not described herein again.
  • the processing unit 912 is further configured to identify a target area according to the maximum distance, where the target area includes the first part and the second part.
  • The function of the processing unit 912 identifying the target area according to the maximum distance may be implemented by the processor 11 in the image processing device 120.
  • For specific details of the processing unit 912 identifying the target area according to the maximum distance, refer to the description of the image processing device 120 in step S540 shown in FIG. 5; details are not described herein again.
  • the processing unit 912 is further configured to perform feature analysis on the target area to identify a description attribute of an apparent feature of a character in the target image.
  • The function of the processing unit 912 performing feature analysis on the target area to identify the description attribute of the apparent feature of the person in the target image may be implemented by the processor 11 in the image processing device 120.
  • For specific details of the processing unit 912 performing feature analysis on the target area to identify the description attribute of the apparent feature of the person in the target image, refer to the description of the image processing device 120 in step S550 shown in FIG. 5; details are not described herein again.
  • the maximum distance is less than a preset threshold.
  • the acquiring unit 911 is further configured to receive a location attribute that the apparent feature has, where the location attribute is used to indicate that the apparent feature is a local attribute.
  • the function of the acquisition unit 911 to receive the location attribute of the apparent feature may be implemented by the interface 14 in the image processing device 120.
  • the acquiring unit 911 is further configured to acquire, in a correspondence relationship between the apparent feature and the location attribute stored in advance, a location attribute that is included in an apparent feature of the character, where the location The attribute is used to indicate that the apparent feature is a local attribute.
  • The function of the acquiring unit 911 acquiring, in the pre-stored correspondence between the apparent feature and the location attribute, the location attribute of the apparent feature of the character may be implemented by the processor 11 in the image processing device 120.
  • the acquiring unit 911 is further configured to receive information including a first location feature and a second location feature of the apparent feature, where the information is used to indicate the apparent feature First position feature and second position feature.
  • the function of the acquisition unit 911 receiving the information including the first location feature and the second location feature of the apparent feature may be implemented by the interface 14 in the image processing device 120.
  • The acquiring unit 911 is further configured to query pre-stored correspondences between the apparent feature and the first location feature and the second location feature, respectively, and acquire the first location feature and the second location feature of the apparent feature according to the apparent feature and the correspondences.
  • The function of the obtaining unit 911 querying the pre-stored correspondences between the apparent feature and the first location feature and the second location feature and obtaining the first location feature and the second location feature of the apparent feature according to the apparent feature and the correspondences may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 912 is further configured to move the target area in a specified direction, centered on the target area, to obtain one or more offset areas.
  • the function of the processing unit 912 offsetting the target area in one or more directions by one or a multiple of offset units, centered on the target area and using one block map as an offset unit, to obtain the one or more offset areas may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 912 is further configured to perform feature analysis on the offset area to identify other description attributes of the apparent features of the character.
  • the function of the processing unit 912 performing feature analysis on the offset area to identify other description attributes of the apparent feature of the character may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 912 is further configured to determine, according to a preset algorithm, a target description attribute from the description attribute and the other description attributes, where the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to target data.
  • the function of the processing unit 912 determining the target description attribute from the description attribute and the other description attributes according to the preset algorithm may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 912 is further configured to divide the target area into a plurality of block maps, where the plurality of block maps have the same shape and are contiguous with one another.
  • the function of the processing unit 912 dividing the target area into a plurality of block maps may be implemented by the processor 11 in the image processing device 120.
  • the processing unit 912 is further configured to offset the target area in one or more directions by one or a multiple of offset units, centered on the target area and using one block map as an offset unit, to obtain the one or more offset areas, where each offset area has the same size as the target area.
  • the function of the processing unit 912 offsetting the target area in one or more directions by one or a multiple of offset units, centered on the target area and using one block map as an offset unit, to obtain the one or more offset areas may be implemented by the processor 11 in the image processing device 120.
  • the description attribute identifying device 910 of the apparent feature in this embodiment of the present invention may be implemented by an Application-Specific Integrated Circuit (ASIC) or a Programmable Logic Device (PLD), and the PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or any combination thereof.
  • the description attribute identifying device 910 of the apparent feature may correspondingly perform the methods described in the embodiments of the present invention, and the foregoing and other operations and/or functions of the respective units in the description attribute identifying device 910 are respectively intended to implement the method in FIG. 5 and the corresponding processes related to that method; for brevity, details are not repeated here.
  • FIG. 10 is a structural diagram of another description attribute identifying device of an apparent feature according to an embodiment of the present invention.
  • the description attribute identifying device 1010 of the apparent feature provided by this embodiment of the present invention is shown in FIG. 10 and is implemented based on the description attribute identification method of the apparent feature shown in FIG. 4; it includes a processor 1011 and a memory 1012.
  • the memory 1012 stores computer instructions, and the processor 1011 is coupled to the memory 1012.
  • the processor 1011 is configured to execute computer instructions in the memory 1012 to perform the following steps:
  • a target image is obtained, the target image including a character.
  • for details about how the processor 1011 obtains the target image, refer to the description of step S310 shown in FIG. 3a; details are not repeated here.
  • feature analysis is performed on the target image to identify a description attribute of an apparent feature of the character in the target image; the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the description attribute is used to identify the characteristic of the appearance of the character, the apparent feature has a global attribute, and the global attribute is used to indicate that the processing manner of the target image is global processing.
  • for details about how the processor 1011 performs feature analysis on the target image to identify the description attribute of the apparent feature of the character in the target image, refer to the description of step S420 shown in FIG. 4; details are not repeated here.
  • the target image is directly selected as the recognition region for feature analysis, without block-based feature analysis being performed on the target image, which simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
  • the processor 1011 is further configured to receive a location attribute that the apparent feature has, where the location attribute is used to indicate that the apparent feature is a global attribute.
  • the processor 1011 is further configured to acquire, from a pre-stored correspondence between apparent features and location attributes, the location attribute of the apparent feature of the character, where the location attribute is used to indicate that the apparent feature is a global attribute.
  • processor 1011 is further configured to perform the following steps:
  • other apparent features associated with the apparent feature are acquired; the other apparent features are used to indicate the types to which other characteristics of the appearance of the character, associated with the characteristic of the apparent feature, belong.
  • for details about how the processor 1011 obtains the other apparent features associated with the apparent feature, acquires the description attributes of the other apparent features, and corrects the description attribute of the apparent feature by using the description attributes of the other apparent features, refer to the detailed description of the related steps after step S420 shown in FIG. 4; details are not repeated here.
  • the description attribute of the apparent feature having the global attribute is corrected by using the description attributes of other apparent features that are associated with it and have local attributes, which improves the accuracy of identifying the description attribute of the apparent feature having the global attribute.
  • the processor 1011 is further configured to perform the related steps in the description attribute identification method based on the apparent feature shown in FIG. 4; for specific implementation details, refer to the related steps of that method, which are not described here.
  • the description attribute identifying device 1010 of the apparent feature may correspond to the image processing device 120 in the embodiments of the present invention, and may correspond to performing the method shown in FIG. 4 according to an embodiment of the present invention; the foregoing and other operations and/or functions of the respective modules in the description attribute identifying device 1010 are respectively intended to implement the corresponding processes related to the method shown in FIG. 4, and for brevity, details are not repeated here.
  • FIG. 11 is a structural diagram of another description attribute identifying device of an apparent feature according to an embodiment of the present invention.
  • the description attribute identifying device 1110 of the apparent feature provided by this embodiment of the present invention is shown in FIG. 11; it includes an acquisition unit 1111 and a processing unit 1112, and the processing unit 1112 is connected to the acquisition unit 1111.
  • the function of each module in the description attribute identifying means 1110 will be described in detail below:
  • the obtaining unit 1111 is configured to acquire a target image, where the target image includes a character.
  • the function of the acquisition unit 1111 to acquire the target image can be implemented by the interface 14 of the image processing device 120.
  • for the specific implementation details of the obtaining unit 1111, refer to step S310 shown in FIG. 3a; details are not repeated here.
  • a processing unit 1112, configured to perform feature analysis on the target image and identify a description attribute of an apparent feature of the character in the target image; the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the description attribute is used to identify the characteristic of the appearance of the character, the apparent feature has a global attribute, and the global attribute is used to indicate that the processing manner of the target image is global processing.
  • for details about how the processing unit 1112 performs feature analysis on the target image to identify the description attribute of the apparent feature of the character in the target image, refer to the specific implementation details of step S420 shown in FIG. 4; details are not repeated here.
  • the target image is directly selected as the recognition region for feature analysis, without block-based feature analysis being performed on the target image, which simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
  • the acquiring unit 1111 is further configured to receive a location attribute that the apparent feature has, where the location attribute is used to indicate that the apparent feature is a global attribute.
  • the function of the acquisition unit 1111 receiving the location attribute of the apparent feature may be implemented by the interface 14 of the image processing device 120.
  • the acquiring unit 1111 is further configured to acquire, from a pre-stored correspondence between apparent features and location attributes, the location attribute of the apparent feature of the character, where the location attribute is used to indicate that the apparent feature is a global attribute.
  • the function of the acquiring unit 1111 acquiring, from the pre-stored correspondence between apparent features and location attributes, the location attribute of the apparent feature of the character may be implemented by the processor 11 in the image processing device 120.
  • the obtaining unit 1111 is further configured to acquire other apparent features associated with the apparent feature; the other apparent features are used to indicate the types to which other characteristics of the appearance of the character, associated with the characteristic of the apparent feature, belong.
  • the function of the acquiring unit 1111 acquiring the other apparent features associated with the apparent feature may be implemented by the interface 14 of the image processing device 120, or may be implemented by the processor 11 of the image processing device 120 querying a pre-stored correspondence between the apparent feature and other apparent features and obtaining the other apparent features associated with the apparent feature.
  • the obtaining unit 1111 is further configured to acquire a description attribute of the other apparent features.
  • the function of the acquisition unit 1111 acquiring the description attributes of the other apparent features may be implemented by the processor 11 of the image processing device 120.
  • the processing unit 1112 is further configured to correct the description attribute of the apparent feature by using the description attributes of the other apparent features.
  • correcting the description attribute of the apparent feature having the global attribute by using the description attributes of other apparent features that are associated with it and have local attributes improves the accuracy of identifying the description attribute of the apparent feature having the global attribute; this function may be implemented by the processor 11 of the image processing device 120.
  • the processing unit 1112 is further configured to implement the functions performed by the related steps in the description attribute identification method based on the apparent feature shown in FIG. 4; for specific implementation details, refer to the related steps of that method, which are not described here.
  • the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the apparent feature has a local attribute, and the local attribute is used to indicate that the processing manner of the image processing device for the target image is local processing; a location feature of the apparent feature of the target image is obtained to determine the position, in a preset character model, of the part of the character embodied by the apparent feature, where the location feature of the apparent feature is used to indicate the position, in the preset character model, of the part of the character embodied by the apparent feature; a target region is identified according to the location feature, the target region including the part of the character; and feature analysis is then performed on the target region to identify the description attribute of the apparent feature of the character.
  • the target region of the target image in which the part of the character embodied by the apparent feature is located is selectively chosen as the recognition region for feature analysis, which reduces meaningless recognition regions, simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
  • the description attribute identifying device 1110 of the apparent feature in this embodiment of the present invention may be implemented by an Application-Specific Integrated Circuit (ASIC) or a Programmable Logic Device (PLD), and the PLD may be a Complex Programmable Logic Device (CPLD), a Field-Programmable Gate Array (FPGA), Generic Array Logic (GAL), or any combination thereof.
  • the description attribute identifying device 1110 may correspondingly perform the method described in the embodiments of the present invention, and the foregoing and other operations and/or functions of the respective units in the description attribute identifying device 1110 of the apparent feature are respectively intended to implement the method in FIG. 4 and the corresponding processes related to that method; for brevity, details are not repeated here.
  • the computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, digital subscriber line) or wireless (for example, infrared, radio, microwave) manner.
  • the computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media.
  • the usable medium can be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium.
  • the semiconductor medium can be a Solid State Disk (SSD).
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is merely a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)

Abstract

A description attribute recognition method and apparatus for an apparent feature. A location feature of an apparent feature of a target image is obtained to determine the position, in a preset character model, of the part of the character embodied by the apparent feature. The location feature of the apparent feature is used to indicate the position, in the preset character model, of the part of the character embodied by the apparent feature, so that a target region is identified according to the location feature, the target region including the part of the character; then, feature analysis is performed on the target region to identify the description attribute of the apparent feature of the character. By determining the location feature of an apparent feature having a local attribute, for such an apparent feature, the target region of the target image in which the part of the character embodied by the apparent feature is located is selectively chosen as the recognition region for feature analysis, which reduces meaningless recognition regions, simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.

Description

Description attribute recognition method and apparatus for an apparent feature

Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a description attribute recognition method and apparatus for an apparent feature.
Background
Pedestrian attribute recognition is a pattern recognition technology used to identify description attributes of apparent features of pedestrians in surveillance video. Apparent features of a pedestrian include gender, age, build, clothing, hair, accessories, orientation, and so on, and each apparent feature includes several description attributes. For example, when the apparent feature is gender, the description attributes of gender include male and female. As another example, when the apparent feature is hair, the description attributes of hair include long hair and short hair; the hair apparent feature may also include other description attributes, for example, description attributes that distinguish hair by color, in which case the description attributes of hair include white, black, brown, and so on. The object of pedestrian attribute recognition is a target image captured by a camera at any random angle, and the purpose is to reduce the difficulty of searching visual information and improve the accuracy and speed of visual information recognition by identifying the description attributes of the apparent features of pedestrians.
In the prior art, when description attribute recognition is performed on an apparent feature of a character in a target image, feature analysis and classification training are performed on the target image. For description attribute recognition of different apparent features, for example, when identifying the long/short description attributes of hair, feature analysis is likewise performed on the entire target image, which involves meaningless image processing operations and increases the workload of computer image processing.
Summary
The present invention discloses a description attribute recognition method and apparatus for an apparent feature, so as to selectively choose, according to different apparent features, a recognition region in a target image that is related to the description attribute of the apparent feature, thereby reducing meaningless image processing operations and reducing the workload of computer image processing.
A first aspect provides a description attribute recognition method for an apparent feature, performed by an image processing device, including: obtaining a target image, where the target image includes a character; the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the apparent feature has a local attribute, and the local attribute is used to indicate that the processing manner of the image processing device for the target image is local processing; obtaining a location feature of the apparent feature of the target image to determine the position, in a preset character model, of the part of the character embodied by the apparent feature, where the location feature of the apparent feature is used to indicate the position, in the preset character model, of the part of the character embodied by the apparent feature; identifying a target region according to the location feature, where the target region includes the part of the character; and then performing feature analysis on the target region to identify the description attribute of the apparent feature of the character.
By determining the location feature of an apparent feature having a local attribute, for such an apparent feature, the target region of the target image in which the part of the character embodied by the apparent feature is located is selectively chosen as the recognition region for feature analysis, which reduces meaningless recognition regions, simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
Based on the first aspect, in a first implementation, the method further includes: receiving a location attribute of the apparent feature, where the location attribute is used to indicate that the apparent feature is a local attribute.
Based on the first aspect, in a second implementation, the method further includes: obtaining, from a pre-stored correspondence between apparent features and location attributes, the location attribute of the apparent feature of the character, where the location attribute is used to indicate that the apparent feature is a local attribute.
With reference to the first aspect or the first or second implementation of the first aspect, in a third implementation, the method further includes: moving the target region in a specified direction, centered on the target region, to obtain one or more offset regions; performing feature analysis on the offset regions to identify other description attributes of the apparent feature of the character; and determining, according to a preset algorithm, a target description attribute from the description attribute and the other description attributes, where the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to target data. The position of the target region is adjusted leftward, rightward, upward, downward, or in other directions. For a target region, obtained from an unclear target image, that does not include the part of the character or includes only part of it, an offset region including the part of the character can be obtained by offsetting the target region, which reduces the risk of large recognition errors in the description attribute caused by an incomplete target region of the character's part obtained from an unclear target image, and improves the accuracy of description attribute recognition.
With reference to the first aspect or the third implementation of the first aspect, in a fourth implementation, the method further includes: extending the offset region or the target region outward, centered on the offset region or the target region, to obtain one or more candidate regions; performing feature analysis on the candidate regions to identify other description attributes of the apparent feature of the character; and determining, according to a preset algorithm, a target description attribute from the description attribute and the other description attributes, where the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to target data. The target region or offset region is extended to its surroundings to adjust its position. For a target region, obtained from an unclear target image, that does not include the part of the character or includes only part of it, a candidate region including the part of the character can be obtained by extending the target region or offset region, which reduces the risk of large recognition errors in the description attribute caused by an incomplete target region or offset region of the character's part obtained from an unclear target image, and improves the accuracy of description attribute recognition.
With reference to the third or fourth implementation of the first aspect, in a fifth implementation, moving the target region in a specified direction, centered on the target region, to obtain one or more offset regions includes: dividing the target region into a number of block maps, where the block maps have the same shape and are contiguous; and, centered on the target region and using one block map as an offset unit, offsetting the target region in one or more directions by one or a multiple of offset units to obtain the one or more offset regions, where each offset region has the same size as the target region.
With reference to the first aspect or any one of the first to fifth implementations of the first aspect, in a sixth implementation, obtaining the location feature of the apparent feature includes: receiving information including the location feature of the apparent feature, where the information is used to indicate the location feature of the apparent feature.
With reference to the first aspect or any one of the first to fifth implementations of the first aspect, in a seventh implementation, obtaining the location feature of the apparent feature includes: querying a pre-stored correspondence between the apparent feature and the location feature; and obtaining the location feature of the apparent feature according to the apparent feature and the correspondence.
A second aspect provides a description attribute recognition method for an apparent feature, performed by an image processing device, including: obtaining a target image, where the target image includes a character; the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the apparent feature has a local attribute, and the local attribute is used to indicate that the processing manner of the image processing device for the target image is local processing; obtaining a first location feature and a second location feature of the apparent feature of the character to determine the positions, in a preset character model, of a first part and a second part of the character embodied by the apparent feature, where the first location feature is used to indicate the position, in the preset character model, of the first part of the character embodied by the apparent feature, and the second location feature is used to indicate the position, in the preset character model, of the second part of the character embodied by the apparent feature; obtaining a maximum distance between the first part and the second part according to the first location feature and the second location feature; identifying a target region according to the maximum distance, where the target region includes the first part and the second part; and then performing feature analysis on the target region to identify the description attribute of the apparent feature of the character in the target image.
By determining multiple location features of an apparent feature having a local attribute, for such an apparent feature, the target region of the target image in which the multiple parts of the character embodied by the apparent feature are located is selectively chosen as the recognition region for feature analysis, which reduces meaningless recognition regions, simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
Based on the second aspect, in a first implementation, the maximum distance is less than a preset threshold. The preset threshold indicates whether the image processing device performs local processing or global processing on the target image. If the maximum distance is less than the preset threshold, the image processing device is instructed to perform local processing on the target image; if the maximum distance is greater than or equal to the preset threshold, the image processing device is instructed to perform global processing on the target image, where global processing means processing the entire target image.
With reference to the second aspect or the first implementation of the second aspect, in a second implementation, the method further includes: receiving a location attribute of the apparent feature, where the location attribute is used to indicate that the apparent feature is a local attribute.
With reference to the second aspect or the first implementation of the second aspect, in a third implementation, the method further includes: obtaining, from a pre-stored correspondence between apparent features and location attributes, the location attribute of the apparent feature of the character, where the location attribute is used to indicate that the apparent feature is a local attribute.
With reference to the second aspect or any one of the first to third implementations of the second aspect, in a fourth implementation, obtaining the first location feature and the second location feature of the apparent feature of the character includes: receiving information including the first location feature and the second location feature of the apparent feature, where the information is used to indicate the first location feature and the second location feature of the apparent feature.
With reference to the second aspect or any one of the first to third implementations of the second aspect, in a fifth implementation, obtaining the first location feature and the second location feature of the apparent feature of the character includes: querying pre-stored correspondences between the apparent feature and the first location feature and the second location feature respectively; and obtaining the first location feature and the second location feature of the apparent feature according to the apparent feature and the correspondences.
With reference to the second aspect or any one of the first to fifth implementations of the second aspect, in a sixth implementation, the method further includes: moving the target region in a specified direction, centered on the target region, to obtain one or more offset regions; performing feature analysis on the offset regions to identify other description attributes of the apparent feature of the character; and determining, according to a preset algorithm, a target description attribute from the description attribute and the other description attributes, where the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to target data. The position of the target region is adjusted leftward, rightward, upward, downward, or in other directions. For a target region, obtained from an unclear target image, that does not include the part of the character or includes only part of it, an offset region including the part of the character can be obtained by offsetting the target region, which reduces the risk of large recognition errors in the description attribute caused by an incomplete target region of the character's part obtained from an unclear target image, and improves the accuracy of description attribute recognition.
With reference to the sixth implementation of the second aspect, in a seventh implementation, the method further includes: extending the offset region or the target region outward, centered on the offset region or the target region, to obtain one or more candidate regions; performing feature analysis on the candidate regions to identify other description attributes of the apparent feature of the character; and determining, according to a preset algorithm, a target description attribute from the description attribute and the other description attributes, where the target description attribute is the description attribute, among the description attribute and the other description attributes, that is closest to target data. The target region or offset region is extended to its surroundings to adjust its position. For a target region, obtained from an unclear target image, that does not include the part of the character or includes only part of it, a candidate region including the part of the character can be obtained by extending the target region or offset region, which reduces the risk of large recognition errors in the description attribute caused by an incomplete target region or offset region of the character's part obtained from an unclear target image, and improves the accuracy of description attribute recognition.
With reference to the sixth or seventh implementation of the second aspect, in an eighth implementation, moving the target region in a specified direction, centered on the target region, to obtain one or more offset regions includes: dividing the target region into a number of block maps, where the block maps have the same shape and are contiguous; and, centered on the target region and using one block map as an offset unit, offsetting the target region in one or more directions by one or a multiple of offset units to obtain the one or more offset regions, where each offset region has the same size as the target region.
A third aspect provides a description attribute recognition method for an apparent feature, performed by the image processing device, including the following steps: obtaining a target image, where the target image includes a character; and performing feature analysis on the target image to identify a description attribute of an apparent feature of the character in the target image, where the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the description attribute is used to identify the characteristic of the appearance of the character, the apparent feature has a global attribute, and the global attribute is used to indicate that the processing manner of the target image is global processing.
By determining that the apparent feature has a global attribute, for such an apparent feature, the target image is directly selected as the recognition region for feature analysis, without block-based feature analysis being performed on the target image, which simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
With reference to the third aspect, in a first implementation, the method further includes: receiving a location attribute of the apparent feature, where the location attribute is used to indicate that the apparent feature is a global attribute.
With reference to the third aspect, in a second implementation, the method further includes: obtaining, from a pre-stored correspondence between apparent features and location attributes, the location attribute of the apparent feature of the character, where the location attribute is used to indicate that the apparent feature is a global attribute.
With reference to the third aspect or the first or second implementation of the third aspect, in a third implementation, the method further includes: acquiring other apparent features associated with the apparent feature, where the other apparent features are used to indicate the types to which other characteristics of the appearance of the character, associated with the characteristic of the apparent feature, belong; acquiring description attributes of the other apparent features; and correcting the description attribute of the apparent feature by using the description attributes of the other apparent features. Correcting the description attribute of the apparent feature having the global attribute by using the description attributes of other apparent features that are associated with it and have local attributes improves the accuracy of identifying the description attribute of the apparent feature having the global attribute.
With reference to the third implementation of the third aspect, in a fourth implementation, acquiring the other apparent features associated with the apparent feature includes: querying a pre-stored correspondence between the apparent feature and other apparent features; and acquiring the other apparent features associated with the apparent feature.
With reference to the third implementation of the third aspect, in a fifth implementation, acquiring the other apparent features associated with the apparent feature includes: receiving information containing identifiers of the other apparent features; and acquiring the other apparent features associated with the apparent feature.
With reference to any one of the third to fifth implementations of the third aspect, in a sixth implementation, acquiring the description attributes of the other apparent features includes: obtaining location features of the other apparent features, where the other apparent features are used to indicate the types to which other characteristics of the appearance of the character belong, the location features of the other apparent features are used to indicate the positions, in a preset character model, of the parts of the character embodied by the other apparent features, the other apparent features have local attributes, and the local attributes are used to indicate that the processing manner of the image processing device for the target image is local processing; identifying a target region according to the location features of the other apparent features, where the target region includes the parts of the character; and performing feature analysis on the target region to identify the description attributes of the other apparent features of the character.
With reference to the sixth implementation of the third aspect, in a seventh implementation, a location attribute of the apparent feature is received, where the location attribute is used to indicate that the other apparent features are local attributes.
With reference to the sixth implementation of the third aspect, in an eighth implementation, the location attributes of the other apparent features of the character are obtained from a pre-stored correspondence between apparent features and location attributes, where the location attributes are used to indicate that the other apparent features are local attributes.
With reference to any one of the sixth to eighth implementations of the third aspect, in a ninth implementation, the method further includes: moving the target region in a specified direction, centered on the target region, to obtain one or more offset regions; performing feature analysis on the offset regions to identify other description attributes of the apparent feature of the character; and determining, according to a preset algorithm, a target description attribute from the description attributes of the other apparent features and the other description attributes, as the description attributes of the other apparent features, where the target description attribute is the description attribute, among the description attributes of the other apparent features and the other description attributes, that is closest to target data.
With reference to the third aspect or the ninth implementation of the third aspect, in a tenth implementation, the method further includes: extending the offset region or the target region outward, centered on the offset region or the target region, to obtain one or more candidate regions; performing feature analysis on the candidate regions to identify other description attributes of the other apparent features of the character; and determining, according to a preset algorithm, a target description attribute from the description attributes of the other apparent features and the other description attributes, as the description attributes of the other apparent features, where the target description attribute is the description attribute, among the description attributes of the other apparent features and the other description attributes, that is closest to target data. The target region or offset region is extended to its surroundings to adjust its position. For a target region, obtained from an unclear target image, that does not include the parts of the character or includes only part of them, a candidate region including the parts of the character can be obtained by extending the target region or offset region, which reduces the risk of large recognition errors in the description attributes of the other apparent features caused by an incomplete target region or offset region obtained from an unclear target image, and improves the accuracy of identifying the description attributes of the other apparent features.
With reference to the ninth or tenth implementation of the third aspect, in an eleventh implementation, moving the target region in a specified direction, centered on the target region, to obtain one or more offset regions includes: dividing the target region into a number of block maps, where the block maps have the same shape and are contiguous; and, centered on the target region and using one block map as an offset unit, offsetting the target region in one or more directions by one or a multiple of offset units to obtain the one or more offset regions, where each offset region has the same size as the target region.
With reference to any one of the sixth to eleventh implementations of the third aspect, in a twelfth implementation, obtaining the location features of the other apparent features includes: receiving information including the location features of the other apparent features, where the information is used to indicate the location features of the other apparent features.
With reference to any one of the sixth to eleventh implementations of the third aspect, in a thirteenth implementation, obtaining the location features of the other apparent features includes: querying a pre-stored correspondence between the other apparent features and location features; and obtaining the location features of the other apparent features according to the other apparent features and the correspondence.
A fourth aspect provides a description attribute recognition apparatus for an apparent feature, including modules configured to perform the description attribute recognition method for an apparent feature in the first aspect or any possible implementation of the first aspect, where the modules may be implemented by hardware or by hardware executing corresponding software.
A fifth aspect provides a description attribute recognition apparatus for an apparent feature, including a processor and a memory, where the memory stores computer instructions and the processor is connected to the memory; the processor is configured to execute the computer instructions in the memory to perform the method in the first aspect or any possible implementation of the first aspect.
A sixth aspect provides a description attribute recognition apparatus for an apparent feature, including modules configured to perform the description attribute recognition method for an apparent feature in the second aspect or any possible implementation of the second aspect, where the modules may be implemented by hardware or by hardware executing corresponding software.
A seventh aspect provides a description attribute recognition apparatus for an apparent feature, including a processor and a memory, where the memory stores computer instructions and the processor is connected to the memory; the processor is configured to execute the computer instructions in the memory to perform the method in the second aspect or any possible implementation of the second aspect.
An eighth aspect provides a description attribute recognition apparatus for an apparent feature, including modules configured to perform the description attribute recognition method for an apparent feature in the third aspect or any possible implementation of the third aspect, where the modules may be implemented by hardware or by hardware executing corresponding software.
A ninth aspect provides a description attribute recognition apparatus for an apparent feature, including a processor and a memory, where the memory stores computer instructions and the processor is connected to the memory; the processor is configured to execute the computer instructions in the memory to perform the method in the third aspect or any possible implementation of the third aspect.
In the description attribute recognition method and apparatus for an apparent feature provided in the embodiments of the present invention, the apparent feature is used to indicate the type to which a characteristic of the appearance of the character belongs, the apparent feature has a local attribute, and the local attribute is used to indicate that the processing manner of the image processing device for the target image is local processing. A location feature of the apparent feature of the target image is obtained to determine the position, in a preset character model, of the part of the character embodied by the apparent feature; the location feature of the apparent feature is used to indicate the position, in the preset character model, of the part of the character embodied by the apparent feature, so that a target region is identified according to the location feature, the target region including the part of the character; then, feature analysis is performed on the target region to identify the description attribute of the apparent feature of the character. By determining the location feature of an apparent feature having a local attribute, for such an apparent feature, the target region of the target image in which the part of the character embodied by the apparent feature is located is selectively chosen as the recognition region for feature analysis, which reduces meaningless recognition regions, simplifies the image processing procedure, saves recognition time for the description attribute, and reduces the workload of computer image processing.
Brief Description of Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments of the present invention are briefly introduced below:
FIG. 1 is a composition diagram of an image processing system according to an embodiment of the present invention;

FIG. 2 is a structural diagram of an image processing device 120 according to an embodiment of the present invention;

FIG. 3a is a flowchart of a description attribute recognition method for an apparent feature according to an embodiment of the present invention;

FIG. 3b is a schematic diagram of determining a target region according to an embodiment of the present invention;

FIG. 3c is a schematic diagram of a target region after being moved according to an embodiment of the present invention;

FIG. 3d is a schematic diagram of block maps according to an embodiment of the present invention;

FIG. 4 is a flowchart of another description attribute recognition method for an apparent feature according to an embodiment of the present invention;

FIG. 5 is a flowchart of another description attribute recognition method for an apparent feature according to an embodiment of the present invention;

FIG. 6 is a structural diagram of a description attribute recognition apparatus for an apparent feature according to an embodiment of the present invention;

FIG. 7 is a structural diagram of another description attribute recognition apparatus for an apparent feature according to an embodiment of the present invention;

FIG. 8 is a structural diagram of another description attribute recognition apparatus for an apparent feature according to an embodiment of the present invention;

FIG. 9 is a structural diagram of another description attribute recognition apparatus for an apparent feature according to an embodiment of the present invention;

FIG. 10 is a structural diagram of another description attribute recognition apparatus for an apparent feature according to an embodiment of the present invention;

FIG. 11 is a structural diagram of another description attribute recognition apparatus for an apparent feature according to an embodiment of the present invention.
Description of Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention.
The image processing device, apparatus, and method provided in this embodiment are mainly applied to the field of pedestrian surveillance and are used to identify description attributes of apparent features of pedestrians in surveillance video. An apparent feature indicates the type to which a characteristic of the appearance of a monitored character belongs; for example, an apparent feature may be the character's hair color, hair length, skin color, height, gender, type of clothing worn, or type of bag carried. In vehicle surveillance applications, an apparent feature may be the color of a vehicle, its license plate, or the facial features, clothing style, or gender of the vehicle's driver. A description attribute identifies a characteristic of the appearance of the character; for example, when the apparent feature is a pedestrian's skin color, the skin color includes multiple description attributes: yellow skin, black skin, or white skin.
The description attribute recognition method and apparatus for an apparent feature provided in this embodiment are applied to an image processing system. Referring to FIG. 1, FIG. 1 is a composition diagram of an image processing system according to an embodiment of the present invention. As shown in FIG. 1, the image processing system includes an initial image capturing apparatus 110 and an image processing device 120, which are communicatively connected. The initial image capturing apparatus 110 is used for surveillance and obtains initial images at different shooting angles, where the initial images include a character. The initial image capturing apparatus 110 sends an initial image to the image processing device 120. After receiving the initial image, the image processing device 120 identifies the contour of the character in the initial image and obtains a contour image enclosed by the contour. The contour image includes the character.
After obtaining the contour image, the image processing device 120 obtains a target image including the character in the contour image. In one manner of obtaining the target image, the image processing device 120 rotates the contour image in a plane according to the angle of the character in a preset character model, so that the character in the rotated contour image has the same angle as the character in the preset character model; the character in the contour image and the character in the preset character model belong to the same character type. For example, if the character in the initial image is a pedestrian, the preset character model is a preset human body model, which is an upright human body model. As another example, if the character in the initial image is a vehicle, the preset character model is a preset vehicle model. In another manner, the image processing device 120 receives a target image input by a user, where the target image is obtained by the user rotating the contour image through a terminal device; the terminal device may be the image processing device 120 provided in this embodiment. The image processing device may serve as a terminal device that receives a user's input instruction and rotates the contour image to obtain the target image.
After obtaining the target image, the image processing device 120 identifies whether the location attribute of the apparent feature is a global attribute or a local attribute. The local attribute is used to indicate that the processing manner of the image processing device 120 for the target image is local processing, and the global attribute is used to indicate that the processing manner of the image processing device 120 for the target image is global processing.
If the location attribute of the apparent feature is a local attribute, the image processing device 120 identifies a target region matching the location feature of the apparent feature, performs feature analysis on the target region, and identifies the description attribute of the apparent feature of the character. The target region includes the part of the character.
If the location attribute of the apparent feature is a global attribute, the image processing device 120 performs feature analysis on the target image and identifies the description attribute of the apparent feature of the character in the target image.
The image capturing apparatus 110 provided in this embodiment is a desktop computer, a server, a mobile computer, a mobile shooting device, a handheld terminal device, a wearable shooting device, or the like that has an image acquisition function such as video recording or photographing.
The image processing device 120 provided in this embodiment is a desktop computer, a server, a mobile computer, a handheld terminal device, or the like.
FIG. 2 is a structural diagram of the image processing device 120 according to an embodiment of the present invention. As shown in FIG. 2, the image processing device 120 provided in this embodiment is a computer and includes a processor 11, a memory 12, an interface 14, and a communication bus 13. The processor 11 communicates with the memory 12 through the communication bus 13.
The interface 14 is configured to communicate with a terminal device; data 122 and information sent by the terminal device can be received through the interface 14. The data 122 includes the target image.
The memory 12 is configured to store a program 121, which is used to identify description attributes of apparent features of a character; the program 121 includes a program with an image processing function. The image processing function refers to the function of outputting, through an image processing model, recognition results of description attributes expressed in probability form; the image processing model may be a mathematical model for processing images, such as a convolutional neural network.
The memory 12 is further configured to store the data 122 and the information sent through the interface 14.
The memory 12 includes a volatile memory, a non-volatile memory, or a combination thereof. The volatile memory is, for example, a random-access memory (RAM). The non-volatile memory is, for example, any machine-readable medium that can store program code, such as a floppy disk, a hard disk, a solid state disk (SSD), or an optical disc.
The processor 11 is configured to execute the program 121 to identify the description attribute of the apparent feature. The processor 11 is configured to identify whether the location attribute of the apparent feature is a global attribute or a local attribute. If the location attribute is a local attribute, the processor 11 identifies a target region matching the location feature of the apparent feature, performs feature analysis on the target region, and identifies the description attribute of the apparent feature of the character in the target image. The location feature is used to indicate the position, in a preset character model, of the part of the character embodied by the apparent feature, and the target region includes the part of the character embodied by the apparent feature. If the location attribute is a global attribute, the processor 11 performs feature analysis on the target image received through the interface 14 and identifies the description attribute of the apparent feature of the character in the target image.
The processor 11 is one of the main components of the image processing device 120; its functions are mainly to interpret computer instructions and to process data in computer software. The processor 11 may be a central processing unit (CPU), a complex programmable logic device (CPLD), or a field-programmable gate array (FPGA).
The description attribute recognition method for an apparent feature disclosed in the embodiments of the present invention is described in detail below. The method is applied to the image processing device 120 shown in FIG. 1 and FIG. 2 and is performed by the processor 11 of the image processing device 120. Referring to FIG. 3a, FIG. 3a is a flowchart of a description attribute recognition method for an apparent feature according to an embodiment of the present invention. As shown in FIG. 3a, this embodiment mainly describes, as applied in the field of pedestrian surveillance, a description attribute recognition method for an apparent feature of a character when the location attribute of the apparent feature is a local attribute, specifically including the following steps.
S310. Obtain a target image, where the target image includes a character. For details about obtaining the target image in this step, refer to the foregoing description, based on FIG. 1, of how the image processing device 120 in the image processing system obtains, after obtaining the contour image, a target image including the character in the contour image; details are not repeated here.
S320. Obtain a location feature of an apparent feature of the target image. The apparent feature has a local attribute, and the local attribute is used to indicate that the processing manner of the image processing device for the target image is local processing. The location feature of the apparent feature is used to indicate the position, in a preset character model, of the part of the character embodied by the apparent feature.
One way to obtain the location feature of the apparent feature is to receive information including the location feature of the apparent feature, where the information is used to indicate the location feature of the apparent feature. Another way is to query a correspondence, pre-stored in the image processing device 120, between the apparent feature and the location feature, and to obtain the location feature of the apparent feature according to the apparent feature and the correspondence.
In this embodiment, the preset character model is a preset human body model, and the location feature of the apparent feature is used to indicate the position, in the preset human body model, of the part of the human body embodied by the apparent feature. The position of the part of the human body in the preset human body model can be determined in the following two manners.
In one manner, the position proportion between the contour of the part of the human body in the preset human body model and the contour of the preset human body model is determined. The position proportion may be determined by obtaining the projection relationship between the contour of the part of the human body in the preset human body model and the contour of the preset human body model, so as to determine the position of the part in the preset human body model, that is, to determine the location feature of the apparent feature.
In the other manner, the position proportion of the geometric region including the part of the human body within the geometric region including the preset human body model is determined, so as to determine the position of the part in the preset human body model, that is, the location feature of the apparent feature. The geometric region including the part of the human body is a first symmetric geometric region enclosed by the height line between the highest and lowest points of the part in the preset human body model and the width line between the leftmost and rightmost points of the part in the preset human body model. The geometric region including the preset human body model is a second symmetric geometric region enclosed by the height line between the highest and lowest points of the preset human body model and the width line between its leftmost and rightmost points. In this implementation, the position proportion may be determined according to the ratio of the height of the first symmetric geometric region to the height of the second symmetric geometric region, or according to the ratio of the width of the first symmetric geometric region to the width of the second symmetric geometric region, or jointly by combining the height ratio and the width ratio, or according to the projection relationship of the first symmetric geometric region onto the second symmetric geometric region.
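The second manner above can be sketched in a few lines; the function names, the point representation, and the particular ratios returned are illustrative assumptions for this sketch, not part of the patent.

```python
# Illustrative sketch: the position proportion of a body part's bounding
# region (the first symmetric geometric region) inside the model's bounding
# region (the second symmetric geometric region).

def bounding_region(points):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of a part."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def position_proportion(part_box, model_box):
    """Height and width ratios of the part's box relative to the model's box,
    plus the offset of the part's top edge as a fraction of the model height."""
    px0, py0, px1, py1 = part_box
    mx0, my0, mx1, my1 = model_box
    height_ratio = (py1 - py0) / (my1 - my0)
    width_ratio = (px1 - px0) / (mx1 - mx0)
    top_ratio = (py0 - my0) / (my1 - my0)
    return height_ratio, width_ratio, top_ratio

# Example: a head region occupying the upper half of a 100x200 model.
model = (0, 0, 100, 200)
head = (0, 0, 100, 100)
print(position_proportion(head, model))  # (0.5, 1.0, 0.0)
```

A position proportion obtained this way (for example, "upper 1/2 of the model") is what step S330 below uses to carve the matching region out of the target image.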
S330. Identify a target region according to the location feature. The target region includes the part of the character.
If the location feature of the apparent feature is determined according to the projection relationship between the contour of the part of the human body in the preset human body model and the contour of the preset human body model, then in step S330 the image processing device 120 scales the contour containing the projection relationship so that the difference between the scaled contour and the contour of the character in the target image falls within a certain error range, and then identifies, according to the projection relationship, the target region including the part of the human body.
The following examples illustrate how the target region is identified when the position proportion is determined according to the ratio of the height of the first symmetric geometric region to the height of the second symmetric geometric region.
For example, the apparent feature is hair, and the description attributes of the apparent feature include short hair and long hair. The location feature of the hair apparent feature is obtained by determining the position proportion of the first symmetric geometric region within the upper 1/2 region of the second symmetric geometric region including the preset human body model. After the location feature of the apparent feature is obtained, the target image is divided into upper and lower regions according to that position proportion, and the upper region of the target image is selected as the target region. Referring to FIG. 3b, FIG. 3b is a schematic diagram of determining a target region according to an embodiment of the present invention. As shown in the right-hand image of FIG. 3b, the target image is divided into upper and lower regions, and the upper region of the target image is selected as the target region.
The position proportion of the first symmetric geometric region within the second symmetric geometric region can be expressed by a combination of an upper-left coordinate and a lower-right coordinate; for example, the two coordinates [0,0] and [w,h/2] express the position proportion of the upper 1/2 region. The image processing device 120 then determines, from this combination of upper-left and lower-right coordinates, that the target region is the upper 1/2 region of the target image enclosed by those coordinates. If the position proportion is determined according to the height ratio to obtain the location feature, the width of the target region may default to the width w of the target image, and h denotes the vertical height of the target image.
The position proportion can be expressed not only by a combination of upper-left and lower-right coordinates, but also in other forms. For example, based on an obtained formal parameter such as "upper 1/2" or "head and neck", the combination of upper-left and lower-right coordinates can be obtained by computing with a position algorithm, or by looking up a pre-stored correspondence between formal parameters and combinations of upper-left and lower-right coordinates, so as to determine the position of the target region.
As another example, the apparent feature is a bag, and the description attributes of the apparent feature include backpack, cross-body bag, and single-shoulder bag. The location feature of the bag apparent feature is obtained by determining the position proportion of the first symmetric geometric region within the middle 1/3 region of the second symmetric geometric region including the preset human body model. According to this position proportion, the target image is divided into upper, middle, and lower regions, and the middle region of the target image is selected as the target region. For example, the position proportion of the middle 1/3 region can be determined by a combination of an upper-left coordinate and a lower-right coordinate, namely the two coordinates [0,h/3] and [w,2h/3]. As shown in the left-hand image of FIG. 3b, the target image is divided into upper, middle, and lower regions, and the middle region of the target image is selected as the target region.
As another example, the apparent feature is lower-body clothing, and the description attributes of the apparent feature include trousers, shorts, short skirt, long skirt, culottes, and so on. The location feature of the lower-body-clothing apparent feature is obtained by determining that the first symmetric geometric region lies in the lower 1/3 region of the second symmetric geometric region including the preset human body model. According to the position proportion of the first symmetric geometric region within the lower 1/3 region of the second symmetric geometric region, the target image is divided into upper, middle, and lower regions, and the lower region of the target image is selected as the target region.
In other implementations, the position proportion of the first symmetric geometric region within the second symmetric geometric region may be determined by a combination of multiple coordinates.
If the position proportion is determined jointly by the ratio of the height of the first symmetric geometric region to the height of the second symmetric geometric region and the ratio of the width of the first symmetric geometric region to the width of the second symmetric geometric region, the position proportion, and thus the position of the target region, can be determined jointly by an upper-left coordinate, a lower-left coordinate, an upper-right coordinate, and a lower-right coordinate. For example, a coordinate combination including the upper-left coordinate [w/4,h/3], the lower-left coordinate [w/3,2h/3], the upper-right coordinate [3w/4,h/3], and the lower-right coordinate [2w/3,2h/3] determines a target region that is an inverted trapezoidal region in the middle of the target image. In other implementations, the position proportion of the first symmetric geometric region within the second symmetric geometric region may be determined by a combination of at least two of the upper-left, lower-left, upper-right, and lower-right coordinates.
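The mapping from corner-coordinate position proportions to a concrete pixel region of the target image can be sketched as follows; the fractional representation and function name are assumptions for illustration, matching the [0,0],[w,h/2]-style coordinates used in the text.

```python
# Illustrative sketch: turning a position proportion, expressed as fractional
# upper-left and lower-right corners of the model frame, into a pixel
# rectangle of a w x h target image.

def target_region(frac_top_left, frac_bottom_right, w, h):
    """Pixel rectangle (x0, y0, x1, y1) for fractional corner coordinates."""
    fx0, fy0 = frac_top_left
    fx1, fy1 = frac_bottom_right
    return (round(fx0 * w), round(fy0 * h), round(fx1 * w), round(fy1 * h))

# Hair example: upper 1/2 of a 300x600 target image, i.e. [0,0],[w,h/2].
print(target_region((0, 0), (1, 1 / 2), 300, 600))      # (0, 0, 300, 300)
# Bag example: middle 1/3, i.e. [0,h/3],[w,2h/3].
print(target_region((0, 1 / 3), (1, 2 / 3), 300, 600))  # (0, 200, 300, 400)
```

A four-corner (trapezoidal) region such as the one in the last example of the text would be represented with all four fractional corners instead of two.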
S340、对所述目标区域进行特征分析,识别出所述目标图像中的人物的表观特征的描述属性。
采用图像处理技术对所述目标区域进行特征分析，获取概率形式表现的描述属性的识别结果，例如表观特征为头发长短的描述属性至少包括两个，分别是长头发描述属性和短头发描述属性，则获得概率形式表现的描述属性的识别结果为两个概率的组合，即为(短头发描述属性0.9，长头发描述属性0.1)，然后根据综合判定法则，识别出概率符合综合判定法则的描述属性，作为目标图像中的人物的表观特征的描述属性。综合判定法则可以是经过比较或者查询后选择接近目标数据的描述属性，例如目标数据的值为1，表示人物的表观特征的描述属性为短头发，选择概率最高为0.9的短头发描述属性，作为目标图像中的人物的表观特征的描述属性。
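上述综合判定法则中"选择概率最接近目标数据（值为1）的描述属性"的逻辑，可以示意如下（目标数据为1时，等价于选取概率最高的描述属性）：

```python
def judge(prob_by_attribute, target=1.0):
    """综合判定法则: 在概率形式的识别结果中,
    选择概率最接近目标数据 target 的描述属性。"""
    return min(prob_by_attribute,
               key=lambda attr: abs(prob_by_attribute[attr] - target))

# 正文例子: (短头发描述属性0.9, 长头发描述属性0.1), 目标数据为1
result = judge({"短头发": 0.9, "长头发": 0.1})
```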
基于图3a所示的实施例,在步骤S320之前,图像处理设备120获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。获取所述表观特征具有的位置属性的方法是,接收终端设备发送的,包含所述表观特征具有的位置属性的信息,以获取所述表观特征具有的位置属性。例如,当表观特征为头发时,头发的描述属性包括长头发和短头发,接收用户发送的表观特征为头发的位置属性为局部属性的信息,确认头发的位置属性为局部属性。又如,当表观特征为包裹类型时,包裹类型的描述属性包括背包或者手拿包,接收用户发送的表观特征为包裹类型的位置属性为局部属性的信息,确认包裹类型的位置属性为局部属性。又如,当表观特征为下半身着装时,下半身着装的描述属性包括裤子或者裙子,接收用户发送的表观特征为下半身着装的位置属性为局部属性的信息,确认下半身着装的位置属性为局部属性。
基于图3a所示的实施例,在步骤S320之前,图像处理设备120在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
基于图3a所示的实施例，在步骤S320之前，图像处理设备120获取所述人物的表观特征和所述表观特征的描述属性。一种方式为，图像处理设备120通过接口14接收用户文字输入的表观特征和所述表观特征的描述属性。另一种方式为，图像处理设备120接收用户在可视界面上点选输入的表观特征和所述表观特征的描述属性。另一种方式为，接收终端设备发送的包含表观特征和所述表观特征的描述属性的信息，以获取所述人物的表观特征和所述表观特征的描述属性。
基于图3a所示的实施例,在执行步骤S330之后,图像处理设备120以目标区域为中心,将所述目标区域向任意方向,或者多个不同的指定方向,或者在多个指定方向中选取的多个随机方向移动后,获得若干个偏移区域。请参见图3c,图3c为本发明实施例提供的目标区域移动后的示意图。如图3c所示,将实线框所围成的目标区域向左和向右移动后,分别获得两个偏移区域(图示中左边的虚线框所围成的偏移区域和右边的虚线框所围成的区域),其中向左移动后获得的左边的虚线框所围成的偏移区域包括整个头部,而目标区域和向右移动后获得的偏移区域只包括了部分的头部,因此对目标区域偏移后,可以获得更准确的识别区域。然后,如图3c所示,图像处理设备120采用图像处理技术对偏移区域进行特征分析,获得表观特征的其他描述属性,按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
如果描述属性和其他描述属性是以概率形式表示的，则目标数据的值可以为1，预设算法为识别出最高的概率表示的目标描述属性，作为目标图像中的人物的表观特征的描述属性。例如，可以是经过比较或者查询后选择概率最高的描述属性，作为目标图像中的人物的表观特征的描述属性。又如，对目标区域、每一个偏移区域匹配的同一描述属性的概率进行求和运算获得的概率之和，然后对所述概率之和进行均值化处理或者支持向量机处理后，选择最高的概率表示的目标描述属性，作为目标图像中的人物的表观特征的描述属性。在其他实施方式中，目标数据可以是标准图像的标准参数，可以是对标准图像进行不同类型的特征分析后获得的标准参数。
在上述获得偏移区域的实现方式中，依据目标区域，偏移所述目标区域获得一个或多个偏移区域的具体实现方式为，将所述目标区域划分为若干个分块图，所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的。例如，分块图的形状可以是横条状，也可以是竖条状，也可以是竖形网格或者横形网格等。请参见图3d，图3d为本发明实施例提供的分块图的示意图。如图3d所示，每个图像中的粗线围成的区域为目标区域，本发明实施例提供的分块图可以是图3d左边显示的竖条状分块图，也可以是图3d中间显示的横条状分块图，也可以是图3d右边显示的竖形或者横形网格状分块图，本发明实施例不限定分块图的形状。然后，如图3d所示的三个图像，以目标区域为偏移中心，以一个所述分块图为偏移单位，将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移，以获得一个或多个偏移区域，其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
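上述"以分块图为偏移单位对目标区域进行偏移"的过程，可按如下一维简化情形示意（竖条状分块图、仅水平偏移，区域以 [left, right) 水平范围表示；函数与参数均为示例假设）：

```python
def offset_regions(left, right, block_w, img_w, steps=(-1, 1)):
    """以目标区域 [left, right) 为偏移中心, 以宽度为 block_w 的竖条状
    分块图为偏移单位, 按 steps 中的倍数向左、右偏移, 得到尺寸与
    目标区域相同且完整落在目标图像内的偏移区域。"""
    regions = []
    for k in steps:
        new_left = left + k * block_w
        new_right = right + k * block_w
        # 偏移区域须完整落在目标图像 [0, img_w) 内, 否则丢弃
        if 0 <= new_left and new_right <= img_w:
            regions.append((new_left, new_right))
    return regions

# 目标区域宽 40, 分块图宽 10, 分别向左、向右各偏移一个偏移单位
regions = offset_regions(left=20, right=60, block_w=10, img_w=100)
```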
在获得若干个偏移区域之后,按照一个或多个不同的预设尺寸将所述偏移区域或者目标区域向周围外延,获得包括所述偏移区域或者目标区域的候选区域。候选区域的尺寸要大于偏移区域的尺寸。获得候选区域后,采用图像处理技术对候选区域进行特征分析,获得表观特征的其他描述属性,按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。所述其他描述属性包括对候选区域进行特征分析后获得的与候选区域匹配的表观特征的描述属性,或者还包括对偏移区域进行特征分析后获得的与偏移区域匹配的表观特征的描述属性。
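上段"将偏移区域或目标区域按预设尺寸向周围外延，并在外延至图像边沿时以整幅目标图像作为候选区域"的过程，可示意如下（区域以 (x0, y0, x1, y1) 表示，函数名为示例假设）：

```python
def expand(region, margin, img_w, img_h):
    """将区域 (x0, y0, x1, y1) 按预设尺寸 margin 向四周外延得到候选区域,
    并将候选区域裁剪在目标图像 (img_w x img_h) 范围内。"""
    x0, y0, x1, y1 = region
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(img_w, x1 + margin), min(img_h, y1 + margin))

# 外延量足够大时, 候选区域外延至图像边沿, 即为整幅目标图像
candidate = expand((10, 10, 50, 50), margin=20, img_w=60, img_h=60)
```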
如果描述属性和其他描述属性是以概率形式表示的，则目标数据的值可以为1，预设算法为识别出最高的概率表示的目标描述属性，作为目标图像中的人物的表观特征的描述属性。例如，可以是经过比较或者查询后选择概率最高的描述属性，作为目标图像中的人物的表观特征的描述属性。例如，表观特征为头发的描述属性包括短头发和长头发，对目标区域、偏移区域、候选区域进行特征分析时，以目标数据的值为1作为短头发描述属性的标准，对一个目标区域和一个偏移区域以及两个候选区域分别进行特征分析，得到目标区域及偏移区域以及每一个候选区域对应的描述属性的概率依次为，[短头发描述属性0.7,长头发描述属性0.3],[短头发描述属性0.95,长头发描述属性0.05],[短头发描述属性0.6,长头发描述属性0.4],[短头发描述属性0.45,长头发描述属性0.55]，则通过max操作，选择的结果为短头发描述属性0.95，即识别头发的描述属性为短头发描述属性。又如，对目标区域、每一个偏移区域和每个候选区域匹配的同一描述属性的概率进行求和运算获得的概率之和，然后对所述概率之和进行均值化处理或者支持向量机处理后，选择最高的概率表示的目标描述属性，作为目标图像中的人物的表观特征的描述属性。在其他实施方式中，目标数据可以是标准图像的标准参数，可以是对标准图像进行不同类型的特征分析后获得的标准参数。
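上段"又如"所述对各区域同一描述属性的概率求和并均值化后取最高者的预设算法，可用原文给出的四组概率复算如下（支持向量机处理的变体此处不展开）：

```python
def fuse(prob_lists):
    """对目标区域、偏移区域与候选区域各自得到的描述属性概率
    按同一描述属性求和并均值化, 再选择均值最高的描述属性作为目标描述属性。"""
    summed = {}
    for probs in prob_lists:
        for attr, p in probs.items():
            summed[attr] = summed.get(attr, 0.0) + p
    n = len(prob_lists)
    averaged = {attr: s / n for attr, s in summed.items()}
    return max(averaged, key=averaged.get), averaged

# 原文中一个目标区域、一个偏移区域、两个候选区域的概率
best, averaged = fuse([
    {"短头发": 0.7, "长头发": 0.3},
    {"短头发": 0.95, "长头发": 0.05},
    {"短头发": 0.6, "长头发": 0.4},
    {"短头发": 0.45, "长头发": 0.55},
])
```

短头发描述属性的均值为 0.675，高于长头发描述属性的 0.325，故目标描述属性仍为短头发描述属性，与按单区域 max 操作得到的结论一致。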
在另一个实施例中，在获得若干个偏移区域或者若干个候选区域之后，可以对目标区域的人体的部位、每个偏移区域的人体的部位和每个候选区域中的人体的部位分别进行轮廓识别。当识别到与预设人体模型的人体的部位的轮廓形状相同的目标区域或者偏移区域或者候选区域时，则保留该目标区域、偏移区域或者候选区域，并对与预设人体模型的人体的部位的轮廓形状相同的目标区域、偏移区域或者候选区域分别进行特征分析，获得与保留的目标区域匹配的描述属性，以及获得与每一个保留的偏移区域匹配的其他描述属性，以及获得与每一个保留的候选区域匹配的其他描述属性。获得上述与目标区域匹配的描述属性和多个与候选区域、偏移区域匹配的其他描述属性后，图像处理设备120按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性，所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。预设算法可以参考基于图3a所示的实施例中，对偏移区域和候选区域进行特征分析获得表观特征的其他描述属性后，按照预设算法从描述属性和其他描述属性中确定目标描述属性的细节，在这里不再赘述。
在获得若干个偏移区域之后，可以随机或者按照预定顺序，选择符合预设个数的偏移区域进行特征分析或者进行人体部位的轮廓识别，以及在获得若干个候选区域之后也可以随机或者按照预定顺序，选择符合预设个数的候选区域进行特征分析或者进行人体部位的轮廓识别，选择出的候选区域的预设个数以及偏移区域的预设个数可以相同也可以不同。例如，可以按照用户的需求只对目标区域进行特征分析，也可以按照用户的选择，只对目标区域和预设个数的偏移区域进行特征分析或者进行轮廓识别，也可以按照用户的选择，只对目标区域和预设个数的候选区域进行特征分析或者进行轮廓识别，也可以既对目标区域进行特征分析，又对预设个数的偏移区域和预设个数的候选区域进行特征分析。当偏移区域向周围外延至目标图像的边沿时，获得的候选区域为目标图像，即识别到候选区域的宽度和高度与目标图像的宽度和高度相同时，获得的候选区域为目标图像。
下面描述本发明实施例提供的另一种表观特征的描述属性识别方法。请参见图4,图4为本发明实施例提供的另一种表观特征的描述属性识别方法的流程图。在本实施例中主要描述了,应用于行人监控领域中的,当所述人物的表观特征具有的位置属性为全局属性时,人物的表观特征的描述属性识别方法,如图4所示,具体包括如下步骤。
S410、获取目标图像,所述目标图像包括人物。本步骤中获取目标图像的具体实现方式,可参照基于图3a所示的实施例中的步骤S310的描述细节,在这不再赘述。
S420、对所述目标图像进行特征分析,识别出所述目标图像中的人物的表观特征的描述属性。所述表观特征具有全局属性,所述全局属性用于指示所述图像处理设备对所述目标图像的处理方式为全局处理。采用图像处理技术对所述目标图像进行特征分析,获取概率形式表现的描述属性的识别结果,根据综合判定法则,识别出概率符合综合判定法则的描述属性,作为目标图像中的人物的表观特征的描述属性。例如,可以是经过比较或者查询后选择概率最高的描述属性,作为目标图像中的人物的表观特征的描述属性。
基于上述描述的所述表观特征具有的位置属性为全局属性时,表观特征的描述属性识别方法,在步骤S420之前,还包括:图像处理设备120获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为全局属性。获取所述表观特征具有的位置属性的方法是,接收终端设备发送的,包含所述表观特征具有的位置属性的信息,以获取所述表观特征具有的位置属性。
基于上述描述的所述表观特征具有的位置属性为全局属性时,表观特征的描述属性识别方法,在步骤S420之前,还包括:图像处理设备120在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为全局属性。
另外，在获取所述人物的表观特征具有的位置属性之前，图像处理设备120获取所述人物的表观特征和所述表观特征的描述属性。一种方式为，图像处理设备120通过接口14接收用户文字输入的表观特征和所述表观特征的描述属性。另一种方式为，图像处理设备120接收用户在可视界面上点选输入的表观特征和所述表观特征的描述属性。另一种方式为，接收终端设备发送的包含表观特征和所述表观特征的描述属性的信息，以获取所述人物的表观特征和所述表观特征的描述属性。
基于上述描述的所述表观特征具有全局属性时,表观特征的描述属性识别方法,在步骤S420之后,还包括:获取与所述表观特征相关联的其他表观特征;所述其他表观特征用于表示所述人物的外表的,与所述表观特征的特性相关联的其它特性所属的类型;获取所述其他表观特征的描述属性;通过所述其他表观特征的描述属性修正所述表观特征的描述属性。
本实施例中获取与所述表观特征相关联的其他表观特征的一种实现方式为,查询预先存储于所述图像处理设备中的,所述表观特征与其他表观特征的对应关系;获取与所述表观特征相关联的其他表观特征。获取与所述表观特征相关联的其他表观特征的另一种实现方式为,接收包含所述其他表观特征的标识的信息,获取与所述表观特征相关联的其他表观特征。
本实施例中获取所述其他表观特征的描述属性的实现方式,可以参考上述基于图3a所示的表观特征具有的位置属性为局部属性时,识别表观特征的描述属性的方法,在这里不再赘述。
本实施例中通过与所述表观特征相关联的所述其他表观特征的描述属性,修正所述表观特征的描述属性,其具体实现方式为,采用相关度加权算法,将表示所述表观特征与所述其他表观特征的关联关系的相关度作为相关度加权算法中的加权的权重,对所述表观特征的描述属性进行加权修正后获得的描述属性,作为目标图像中的人物的表观特征的描述属性。本实施方式中,与具有全局属性的表观特征相关联的所述其他表观特征具有局部属性。
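上述相关度加权算法可示意如下（加权形式与相关度取值均为示例假设，并非本发明限定的具体算法；示例中用局部表观特征的描述属性概率，按相关度修正全局表观特征的描述属性概率）：

```python
def correct(global_probs, local_probs, correlation):
    """相关度加权修正: 以相关度 correlation (0~1) 作为加权权重,
    用相关联的局部表观特征的描述属性概率 local_probs
    修正具有全局属性的表观特征的描述属性概率 global_probs。"""
    corrected = {}
    for attr, p in global_probs.items():
        # 相关度越高, 相关联的局部表观特征对修正结果的影响越大
        corrected[attr] = (1 - correlation) * p + correlation * local_probs.get(attr, 0.0)
    return corrected

# 示例假设: 全局表观特征(性别)的描述属性概率, 由局部表观特征修正
corrected = correct({"男": 0.6, "女": 0.4}, {"男": 0.9, "女": 0.1}, correlation=0.5)
```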
基于上述描述的所述表观特征具有全局属性时，表观特征的描述属性识别方法，在步骤S420之后，还包括：将目标图像或者目标区域或者偏移区域或候选区域输入至数学模型后，通过所述数学模型输出的接近目标数据的描述属性，作为目标图像中的人物的表观特征的描述属性。其中，所述数学模型是采用训练数据集，对计算模型进行训练修正后，获得的数学模型。训练数据集包括其他图像和其他图像中的人物的其他表观特征的描述属性，所述其他图像包括目标图像中的目标区域或者偏移区域或候选区域或者除目标图像之外的包括其他人物的其他图像。所述其他图像中的人物的其他表观特征与所述目标图像的具有全局属性的表观特征相关联。在本实施方式中，所述其他图像中的人物的其他表观特征具有全局属性或者局部属性。其他图像包括的人物与目标图像中的人物属于同一人物类型。本实施方式中，采用训练数据集对计算模型进行训练，获得其他图像中的人物的其他表观特征的描述属性与目标数据的差距，然后依据所述差距调整计算模型的计算参数，获得调整计算参数后的计算模型，通过调整后的计算模型识别其他图像的人物的表观特征的描述属性，对所述计算模型的训练修正截止于通过计算模型获得的其他图像的人物的表观特征的描述属性与目标数据的差距小于或等于目标误差。
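上段"训练修正截止于描述属性与目标数据的差距小于或等于目标误差"的迭代过程，可以用一个极简的单参数梯度下降片段示意（计算模型、学习率、训练数据均为示例假设，仅用于说明截止条件）：

```python
def train(samples, target_error=0.01, lr=0.1, max_iter=1000):
    """用训练数据集迭代调整计算模型的计算参数 w,
    截止于模型输出与目标数据的差距(均方误差)小于或等于目标误差。"""
    w = 0.0
    error = float("inf")
    for _ in range(max_iter):
        # 计算模型输出与目标数据的差距(均方误差)
        error = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
        if error <= target_error:
            break  # 差距小于或等于目标误差, 训练修正截止
        # 依据所述差距调整计算参数(此处以梯度下降为例)
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w, error

# 示例假设的训练数据: 目标数据满足 y = x
w, err = train([(1.0, 1.0), (2.0, 2.0)])
```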
下面描述本发明实施例提供的另一种表观特征的描述属性识别方法。请参见图5,图5为本发明实施例提供的另一种表观特征的描述属性识别方法的流程图。在本实施例中主要描述了,应用于行人监控领域中的,通过多个位置特征对人物的表观特征的描述属性进行识别的方法,如图5所示,具体包括如下步骤。
S510、获取目标图像,所述目标图像包括人物。本步骤中获取目标图像的具体实现方式,可参照基于图3a所示的实施例中的步骤S310的描述细节,在这不再赘述。
S520、获取所述人物的表观特征的第一位置特征和第二位置特征。也可以获取人物的表观特征的多个位置特征,每个位置特征用于表示所述表观特征所体现的所述人物的每个部位在预设人物模型中的位置,所述表观特征用于表示所述人物的外表的特性所属的类型。例如所述第一位置特征用于表示所述表观特征所体现的所述人物的第一部位在预设人物模型中的位置,所述第二位置特征用于表示所述表观特征所体现的所述人物的第二部位在所述预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理。本步骤中获取所述人物的表观特征的位置特征的实现方式,可参照基于图3a所示的实施例方式中,步骤S320实现的细节,在这里不再赘述。
S530、根据所述第一位置特征和所述第二位置特征，获取所述第一部位和所述第二部位之间的最大距离。所述最大距离小于预设阈值。最大距离包括两个部位(第一部位和所述第二部位)之间的最大垂直高度和/或两个部位(第一部位和所述第二部位)之间的最大宽度。所述预设阈值是保证分析最大距离的效率高于分析全部目标图像的效率的值。如果有多个位置特征，所述最大距离大于或者等于任意两个部位之间的最大垂直高度，或者大于或者等于任意两个部位之间的最大宽度。
S540、根据所述最大距离识别目标区域,所述目标区域包括所述第一部位和所述第二部位。如果最大距离为两个部位之间的最大垂直高度,则默认目标区域的宽度为目标图像的宽度,如果最大距离为两个部位之间的最大宽度,则默认目标区域的高度为目标图像的高度。如果最大距离包括两个部位之间的最大垂直高度和最大宽度,则目标区域的高度为两个部位之间的最大垂直高度,目标区域的宽度为两个部位之间的最大宽度。如果是多个位置特征,目标区域包括所述表观特征所体现的所述人物的每个部位。
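步骤S530与S540中依据两个位置特征求最大垂直高度、最大宽度并据此确定同时包含两个部位的目标区域的过程，可示意如下（位置特征以部位包围框 (x0, y0, x1, y1) 表示，属示例假设）：

```python
def region_from_two_parts(box1, box2):
    """依据第一部位与第二部位的包围框, 求两部位之间的最大垂直高度
    与最大宽度, 并据此确定同时包括两个部位的目标区域。"""
    x0 = min(box1[0], box2[0])
    y0 = min(box1[1], box2[1])
    x1 = max(box1[2], box2[2])
    y1 = max(box1[3], box2[3])
    max_height = y1 - y0  # 两部位之间的最大垂直高度
    max_width = x1 - x0   # 两部位之间的最大宽度
    # 同时使用最大垂直高度与最大宽度时, 目标区域即两包围框的外接矩形;
    # 若只使用其中一项, 另一维默认取目标图像的对应尺寸(见正文)
    return (x0, y0, x1, y1), max_height, max_width

region, mh, mw = region_from_two_parts((10, 20, 30, 40), (25, 50, 60, 90))
```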
S550、对所述目标区域进行特征分析,识别所述目标图像中的人物的表观特征的描述属性。
基于上述的通过多个位置特征,对人物的表观特征的描述属性进行识别的实施例,在步骤S520之前,还包括:图像处理设备120获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。本实施例中获取所述表观特征具有的位置属性的方式,可参考基于图3a所示的实施例中,获取所述表观特征具有的位置属性的具体实现细节,在这里不再赘述。
基于上述的通过多个位置特征，对人物的表观特征的描述属性进行识别的实施例，在步骤S520之前，还包括：图像处理设备120获取所述人物的表观特征，所述表观特征的描述属性。本实施例中获取所述人物的表观特征，所述表观特征的描述属性的方式，可参考基于图3a所示的实施例中，获取所述人物的表观特征，所述表观特征的描述属性的具体实现细节，在这里不再赘述。
基于上述的通过多个位置特征,对人物的表观特征的描述属性进行识别的实施例,在步骤S540之后,还包括:获得偏移区域,以及对偏移区域进行特征分析后识别出其他描述属性,然后按照预设算法从对目标区域进行特征分析后识别的描述属性,以及对偏移区域进行特征分析后识别的其他描述属性中,确定目标描述属性,本实施方式中的获得偏移区域,根据偏移区域识别出其他描述属性,然后根据目标区域识别出的描述属性和根据偏移区域识别出的其他描述属性确定目标属性的具体实现细节,可参照基于图3a所示的实施例中的具体实现细节,在这里不再赘述。另外,根据偏移区域获得候选区域,根据候选区域获得其他描述属性,再根据描述属性和多个其他描述属性,确定目标描述属性的具体实现细节,可参照基于图3a所示的实施例中的具体实现细节,在这里不再赘述。
应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本发明实施例的实施过程构成任何限定。
值得说明的是,对于上述方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本发明所必须的。
本领域的技术人员根据以上描述的内容,能够想到的其他合理的步骤组合,也属于本发明的保护范围内。其次,本领域技术人员也应该熟悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本发明所必须的。
上文中结合图3a至图5详细描述了根据本发明实施例所提供的表观特征的描述属性识别的方法,下面将结合图6至图11,描述根据本发明实施例所提供的表观特征的描述属性识别的装置。
请参见图6,图6为本发明实施例提供的一种表观特征的描述属性识别装置的结构图,如图6所示,本发明实施例提供的表观特征的描述属性识别装置610是基于图3a所示的表观特征的描述属性识别方法实现的,其包括处理器611和存储器612。所述存储器612存储计算机指令,所述处理器611和所述存储器612连接。所述处理器611用于执行所述存储器612中的计算机指令,以执行以下步骤:
获取目标图像,所述目标图像包括人物。在本实施方式中,处理器611获取目标图像的细节,可参考图3a所示的步骤S310的描述,在这不再赘述。
获取所述目标图像的表观特征的位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述表观特征的位置特征,用于表示所述表观特征所体现的所述人物的部位在预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理。在本实施方式中,处理器611获取所述目标图像的表观特征的位置特征的细节,可参考图3a所示的步骤S320的描述,在这不再赘述。
根据所述位置特征,识别目标区域,所述目标区域包括所述人物的部位。本实施方式中处理器611根据所述位置特征,识别目标区域的细节,可参考图3a所示的步骤S330的描述,在这不再赘述。
对所述目标区域进行特征分析，识别所述人物的表观特征的描述属性。本实施方式中处理器611对所述目标区域进行特征分析，识别所述人物的表观特征的描述属性的细节，可参考图3a所示的步骤S340的描述，在这不再赘述。
作为一种可选的实施方式,所述处理器611,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
作为一种可选的实施方式,所述处理器611,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
作为一种可选的实施方式,所述处理器611还用于执行以下步骤:
以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
本实施方式中处理器611以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域,对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性,按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性的细节可参考图3a所示的步骤S330之后的相关的实施方式的描述,在这不再赘述。
作为一种可选的实施方式,所述处理器611,还用于将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
作为一种可选的实施方式,所述处理器611,还用于接收包括所述表观特征的位置特征的信息,所述信息用于指示所述表观特征的位置特征。
作为一种可选的实施方式,所述处理器611,还用于查询预先存储的所述表观特征与位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的位置特征。
应理解,根据本发明实施例的表观特征的描述属性识别装置610可对应于本发明实施例中的图像处理设备120,并可以对应于执行根据本发明实施例的图3a所示的方法中的相应主体,并且表观特征的描述属性识别装置610中的各个模块的上述和其它操作和/或功能分别为了实现图3a所示的方法相关的各个流程,为了简洁,在此不再赘述。
请参见图7,图7为本发明实施例提供的另一种表观特征的描述属性识别装置的结构图,如图7所示,本发明实施例提供的表观特征的描述属性识别装置710是基于图5所示的表观特征的描述属性识别方法实现的,其包括处理器711和存储器712。所述存储器712存储计算机指令,所述处理器711和所述存储器712连接。所述处理器711用于执行所述存储器712中的计算机指令,以执行以下步骤:
获取目标图像,所述目标图像包括人物。在本实施方式中,处理器711获取目标图像的细节可参考图3a所示的步骤S310的描述,在这不再赘述。
获取所述人物的表观特征的第一位置特征和第二位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述第一位置特征用于表示所述表观特征所体现的所述人物的第一部位在预设人物模型中的位置,所述第二位置特征用于表示所述表观特征所体现的所述人物的第二部位在所述预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理。在本实施方式中,处理器711获取所述目标图像的表观特征的位置特征的细节,可参考图3a所示的步骤S320的描述,在这不再赘述。
根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离。在本实施方式中,处理器711根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离的细节,可参考图5所示的步骤S530的描述,在这不再赘述。
根据所述最大距离识别目标区域,所述目标区域包括所述第一部位和所述第二部位。在本实施方式中,处理器711根据所述最大距离识别目标区域的细节,可参考图5所示的步骤S540的描述,在这不再赘述。
对所述目标区域进行特征分析,识别所述目标图像中的人物的表观特征的描述属性。在本实施方式中,处理器711对所述目标区域进行特征分析,识别所述目标图像中的人物的表观特征的描述属性的细节,可参考图5所示的步骤S550的描述,在这不再赘述。
作为一种可选的实施方式,所述最大距离小于预设阈值。
作为一种可选的实施方式,所述处理器711,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
作为一种可选的实施方式,所述处理器711,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
作为一种可选的实施方式,所述处理器711,还用于接收包括所述表观特征的第一位置特征和第二位置特征的信息,所述信息用于指示所述表观特征的第一位置特征和第二位置特征。
作为一种可选的实施方式,所述处理器711,还用于查询预先存储的所述表观特征分别与第一位置特征和第二位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的第一位置特征和第二位置特征。
作为一种可选的实施方式,所述处理器711还用于执行以下步骤:
以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
本实施方式中处理器711以所述目标区域为中心，将所述目标区域向指定方向移动以获得一个或多个偏移区域，对所述偏移区域进行特征分析，识别所述人物的表观特征的其他描述属性，按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性的细节可参考图3a所示的步骤S330之后的相关的实施方式的描述，在这不再赘述。
作为一种可选的实施方式,所述处理器711,还用于将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
应理解,根据本发明实施例的表观特征的描述属性识别装置710可对应于本发明实施例中的图像处理设备120,并可以对应于执行根据本发明实施例的图5所示的方法中的相应主体,并且表观特征的描述属性识别装置710中的各个模块的上述和其它操作和/或功能分别为了实现图5所示的方法相关的各个流程,为了简洁,在此不再赘述。
请参见图8,图8为本发明实施例提供的另一种表观特征的描述属性识别装置的结构图,如图8所示,本发明实施例提供的表观特征的描述属性识别装置810是基于图3a所示的表观特征的描述属性识别方法实现的,其包括获取单元811和处理单元812,所述处理单元812和所述获取单元811连接。下面详细介绍表观特征的描述属性识别装置810中的每个模块的功能:
所述获取单元811,用于获取目标图像,所述目标图像包括人物。在本实施方式中,获取单元811获取目标图像的功能,可以通过图像处理设备120中的接口14实现。获取单元811获取目标图像的功能,可以参考图3a所示的步骤S310描述的图像处理设备120获取目标图像的具体细节,在这里不再赘述。
所述获取单元811,还用于获取所述目标图像的表观特征的位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述表观特征的位置特征,用于表示所述表观特征所体现的所述人物的部位在预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理。在本实施方式中,获取单元811获取所述目标图像的表观特征的位置特征的功能,可以通过图像处理设备120中的接口14实现。获取单元811获取所述目标图像的表观特征的位置特征的功能,可以参考图3a所示的步骤S320描述的图像处理设备120获取所述目标图像的表观特征的位置特征的具体细节,在这里不再赘述。
所述处理单元812,用于根据所述位置特征,识别目标区域,所述目标区域包括所述人物的部位。在本实施方式中,处理单元812根据所述位置特征,识别目标区域的功能,可以通过图像处理设备120中的处理器11实现。处理单元812根据所述位置特征,识别目标区域的功能,可以参考图3a所示的步骤S330描述的图像处理设备120根据所述位置特征,识别目标区域的具体细节,在这里不再赘述。
所述处理单元812,还用于对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性。在本实施方式中,处理单元812对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性的功能,可以通过图像处理设备120中的处理器11实现。处理单元812对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性的功能,可以参考图3a所示的步骤S340描述的图像处理设备120对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性的具体细节,在这里不再赘述。
作为一种可选的实施方式，所述获取单元811，还用于接收所述表观特征具有的位置属性，所述位置属性用于指示所述表观特征为局部属性。在本实施方式中，获取单元811接收所述表观特征具有的位置属性的功能，可以通过图像处理设备120中的接口14实现。
作为一种可选的实施方式,所述获取单元811,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。在本实施方式中,获取单元811在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性的功能,可以通过图像处理设备120中的处理器11实现。
作为一种可选的实施方式,所述处理单元812,还用于以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域。在本实施方式中,处理单元812以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域的功能,可以通过图像处理设备120中的处理器11实现。
所述处理单元812，还用于对所述偏移区域进行特征分析，识别所述人物的表观特征的其他描述属性。在本实施方式中，处理单元812对所述偏移区域进行特征分析，识别所述人物的表观特征的其他描述属性的功能，可以通过图像处理设备120中的处理器11实现。
所述处理单元812,还用于按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。在本实施方式中,处理单元812按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性的功能,可以通过图像处理设备120中的处理器11实现。
作为一种可选的实施方式,所述处理单元812,还用于将所述目标区域划分为若干个分块图。在本实施方式中,处理单元812将所述目标区域划分为若干个分块图的功能,可以通过图像处理设备120中的处理器11实现。
所述处理单元812,还用于以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。在本实施方式中,处理单元812以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域的功能,可以通过图像处理设备120中的处理器11实现。
作为一种可选的实施方式,所述获取单元811,还用于接收包括所述表观特征的位置特征的信息,所述信息用于指示所述表观特征的位置特征。在本实施方式中,获取单元811接收包括所述表观特征的位置特征的信息的功能,可以通过图像处理设备120中的接口14实现。
作为一种可选的实施方式,所述获取单元811,还用于查询预先存储的所述表观特征与位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的位置特征。在本实施方式中,获取单元811查询预先存储的所述表观特征与位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的位置特征的功能,可以通过图像处理设备120中的处理器11实现。
应理解的是，本发明实施例的表观特征的描述属性识别装置810可以通过专用集成电路(Application Specific Integrated Circuit，ASIC)实现，或可编程逻辑器件(Programmable Logic Device，PLD)实现，上述PLD可以是复杂可编程逻辑器件(Complex Programmable Logic Device，CPLD)，现场可编程门阵列(Field-Programmable Gate Array，FPGA)，通用阵列逻辑(Generic Array Logic，GAL)或其任意组合。通过软件实现图3a所示的描述属性识别方法时，表观特征的描述属性识别装置810及其各个模块也可以为软件模块。
根据本发明实施例的表观特征的描述属性识别装置810可对应于执行本发明实施例中描述的方法,并且表观特征的描述属性识别装置810中的各个单元的上述和其它操作和/或功能分别为了实现图3a中的方法及和图3a中的方法相关的相应流程,为了简洁,在此不再赘述。
请参见图9,图9为本发明实施例提供的另一种表观特征的描述属性识别装置的结构图,如图9所示,本发明实施例提供的表观特征的描述属性识别装置910是基于图5所示的表观特征的描述属性识别方法实现的,其包括获取单元911和处理单元912,所述处理单元912和所述获取单元911连接。下面详细介绍表观特征的描述属性识别装置910中的每个模块的功能:
所述获取单元911,用于获取目标图像,所述目标图像包括人物。在本实施方式中,获取单元911获取目标图像的功能,可以通过图像处理设备120中的接口14实现。获取单元911获取目标图像的功能,可以参考图3a所示的步骤S310描述的图像处理设备120获取目标图像的具体细节,在这里不再赘述。
所述获取单元911，还用于获取所述人物的表观特征的第一位置特征和第二位置特征，所述表观特征用于表示所述人物的外表的特性所属的类型，所述第一位置特征用于表示所述表观特征所体现的所述人物的第一部位在预设人物模型中的位置，所述第二位置特征用于表示所述表观特征所体现的所述人物的第二部位在所述预设人物模型中的位置，所述表观特征具有局部属性，所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理。在本实施方式中，获取单元911获取所述人物的表观特征的第一位置特征和第二位置特征的功能，可以通过图像处理设备120中的接口14实现。获取单元911获取所述人物的表观特征的第一位置特征和第二位置特征的功能，可以参考图5所示的步骤S520描述的图像处理设备120获取所述人物的表观特征的第一位置特征和第二位置特征的具体细节，在这里不再赘述。
所述处理单元912,用于根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离。在本实施方式中,处理单元912根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离的功能,可以通过图像处理设备120中的处理器11实现。处理单元912根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离的功能,可以参考图5所示的步骤S530描述的图像处理设备120根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离的具体细节,在这里不再赘述。
所述处理单元912,还用于根据所述最大距离识别目标区域,所述目标区域包括所述第一部位和所述第二部位。在本实施方式中,处理单元912根据所述最大距离识别目标区域的功能,可以通过图像处理设备120中的处理器11实现。处理单元912根据所述最大距离识别目标区域的功能,可以参考图5所示的步骤S540描述的图像处理设备120根据所述最大距离识别目标区域的具体细节,在这里不再赘述。
所述处理单元912，还用于对所述目标区域进行特征分析，识别所述目标图像中的人物的表观特征的描述属性。在本实施方式中，处理单元912对所述目标区域进行特征分析，识别所述目标图像中的人物的表观特征的描述属性的功能，可以通过图像处理设备120中的处理器11实现。处理单元912对所述目标区域进行特征分析，识别所述目标图像中的人物的表观特征的描述属性的功能，可以参考图5所示的步骤S550描述的图像处理设备120对所述目标区域进行特征分析，识别所述目标图像中的人物的表观特征的描述属性的具体细节，在这里不再赘述。
作为一种可选的实施方式,所述最大距离小于预设阈值。
作为一种可选的实施方式,所述获取单元911,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。在本实施方式中,获取单元911接收所述表观特征具有的位置属性的功能,可以通过图像处理设备120中的接口14实现。
作为一种可选的实施方式,所述获取单元911,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。在本实施方式中,获取单元911在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性的功能,可以通过图像处理设备120中的处理器11实现。
作为一种可选的实施方式,所述获取单元911,还用于接收包括所述表观特征的第一位置特征和第二位置特征的信息,所述信息用于指示所述表观特征的第一位置特征和第二位置特征。在本实施方式中,获取单元911接收包括所述表观特征的第一位置特征和第二位置特征的信息的功能,可以通过图像处理设备120中的接口14实现。
作为一种可选的实施方式,所述获取单元911,还用于查询预先存储的所述表观特征分别与第一位置特征和第二位置特征的对应关系,根据所述表观特征以及所述对应关系获取所述表观特征的第一位置特征和第二位置特征。在本实施方式中,获取单元911查询预先存储的所述表观特征分别与第一位置特征和第二位置特征的对应关系,根据所述表观特征以及所述对应关系获取所述表观特征的第一位置特征和第二位置特征的功能,可以通过图像处理设备120中的处理器11实现。
作为一种可选的实施方式，所述处理单元912，还用于以所述目标区域为中心，将所述目标区域向指定方向移动以获得一个或多个偏移区域。在本实施方式中，处理单元912以所述目标区域为中心，将所述目标区域向指定方向移动以获得一个或多个偏移区域的功能，可以通过图像处理设备120中的处理器11实现。
所述处理单元912,还用于对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性。在本实施方式中,处理单元912对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性的功能,可以通过图像处理设备120中的处理器11实现。
所述处理单元912,还用于按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。在本实施方式中,处理单元912按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性的功能,可以通过图像处理设备120中的处理器11实现。
作为一种可选的实施方式，所述处理单元912，还用于将所述目标区域划分为若干个分块图，所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的。在本实施方式中，处理单元912将所述目标区域划分为若干个分块图的功能，可以通过图像处理设备120中的处理器11实现。
所述处理单元912,还用于以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。在本实施方式中,处理单元912以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域的功能,可以通过图像处理设备120中的处理器11实现。
应理解的是，本发明实施例的表观特征的描述属性识别装置910可以通过专用集成电路(Application Specific Integrated Circuit，ASIC)实现，或可编程逻辑器件(Programmable Logic Device，PLD)实现，上述PLD可以是复杂可编程逻辑器件(Complex Programmable Logic Device，CPLD)，现场可编程门阵列(Field-Programmable Gate Array，FPGA)，通用阵列逻辑(Generic Array Logic，GAL)或其任意组合。通过软件实现图5所示的描述属性识别方法时，表观特征的描述属性识别装置910及其各个模块也可以为软件模块。
根据本发明实施例的表观特征的描述属性识别装置910可对应于执行本发明实施例中描述的方法,并且表观特征的描述属性识别装置910中的各个单元的上述和其它操作和/或功能分别为了实现图5中的方法及和图5中的方法相关的相应流程,为了简洁,在此不再赘述。
请参见图10,图10为本发明实施例提供的另一种表观特征的描述属性识别装置的结构图,如图10所示,本发明实施例提供的表观特征的描述属性识别装置1010是基于图4所示的表观特征的描述属性识别方法实现的,其包括处理器1011和存储器1012。所述存储器1012存储计算机指令,所述处理器1011和所述存储器1012连接。所述处理器1011用于执行所述存储器1012中的计算机指令,以执行以下步骤:
获取目标图像,所述目标图像包括人物。在本实施方式中,处理器1011获取目标图像的细节,可参考图3a所示的步骤S310的描述,在这不再赘述。
对所述目标图像进行特征分析，识别出所述目标图像中的人物的表观特征的描述属性。所述表观特征用于表示所述人物的外表的特性所属的类型，所述描述属性用于标识所述人物的外表的特性，所述表观特征具有全局属性，所述全局属性用于指示对所述目标图像的处理方式为全局处理。
在本实施方式中,处理器1011对所述目标图像进行特征分析,识别出所述目标图像中的人物的表观特征的描述属性的细节,可参考图4所示的步骤S420的描述,在这不再赘述。
通过确定表观特征具有全局属性,对于具有全局属性的表观特征,直接选取目标图像作为特征分析的识别区域,不用对目标图像进行分块特征分析,简化图像处理操作过程,节约了描述属性的识别时间,降低了计算机图像处理的工作负荷。
作为一种可选的实施方式,所述处理器1011,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为全局属性。
作为一种可选的实施方式,所述处理器1011,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为全局属性。
作为一种可选的实施方式,所述处理器1011还用于执行以下步骤:
获取与所述表观特征相关联的其他表观特征,获取所述其他表观特征的描述属性,通过所述其他表观特征的描述属性修正所述表观特征的描述属性,所述其他表观特征用于表示所述人物的外表的,与所述表观特征的特性相关联的其它特性所属的类型。在本实施方式中,处理器1011获取与所述表观特征相关联的其他表观特征,获取所述其他表观特征的描述属性,通过所述其他表观特征的描述属性修正所述表观特征的描述属性的细节,可参考图4所示的步骤S420之后的相关的步骤的细节描述,在这不再赘述。
通过与具有全局属性的表观特征相关联的,且具有局部属性的其他表观特征的描述属性,修正具有全局属性的表观特征的描述属性,提高了识别具有全局属性的表观特征的描述属性的准确率。
作为一种可选的实施方式,所述处理器1011,还用于执行基于图4所示的表观特征的描述属性识别方法中的相关步骤,具体实现细节可参照基于图4所示的表观特征的描述属性识别方法中的相关步骤,在这里不再赘述。
应理解,根据本发明实施例的表观特征的描述属性识别装置1010可对应于本发明实施例中的图像处理设备120,并可以对应于执行根据本发明实施例的图4所示的方法中的相应主体,并且表观特征的描述属性识别装置1010中的各个模块的上述和其它操作和/或功能分别为了实现图4所示的方法相关的各个流程,为了简洁,在此不再赘述。
请参见图11,图11为本发明实施例提供的另一种表观特征的描述属性识别装置的结构图,如图11所示,本发明实施例提供的表观特征的描述属性识别装置1110包括获取单元1111和处理单元1112,所述处理单元1112和所述获取单元1111连接。下面详细介绍表观特征的描述属性识别装置1110中的每个模块的功能:
获取单元1111,用于获取目标图像,所述目标图像包括人物。获取单元1111获取目标图像的功能可以通过图像处理设备120的接口14实现。获取单元1111获取目标图像的功能,可以参考图3a所示的步骤S310的具体实现细节,在这不再赘述。
处理单元1112，用于对所述目标图像进行特征分析，识别出所述目标图像中的人物的表观特征的描述属性；所述表观特征用于表示所述人物的外表的特性所属的类型，所述描述属性用于标识所述人物的外表的特性，所述表观特征具有全局属性，所述全局属性用于指示对所述目标图像的处理方式为全局处理。处理单元1112对所述目标图像进行特征分析，识别出所述目标图像中的人物的表观特征的描述属性的功能，可以参考图4所示的步骤S420的具体实现细节，在这不再赘述。
通过确定表观特征具有全局属性,对于具有全局属性的表观特征,直接选取目标图像作为特征分析的识别区域,不用对目标图像进行分块特征分析,简化图像处理操作过程,节约了描述属性的识别时间,降低了计算机图像处理的工作负荷。
作为一种可选的实施方式,获取单元1111,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为全局属性。获取单元1111接收所述表观特征具有的位置属性的功能,可以通过图像处理设备120的接口14实现。
作为一种可选的实施方式，获取单元1111，还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性，所述位置属性用于指示所述表观特征为全局属性。获取单元1111在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性的功能，可以通过图像处理设备120的处理器11实现。
作为一种可选的实施方式，获取单元1111，还用于获取与所述表观特征相关联的其他表观特征；所述其他表观特征用于表示所述人物的外表的，与所述表观特征的特性相关联的其它特性所属的类型。在本实施方式中，获取单元1111获取与所述表观特征相关联的其他表观特征的功能，可以通过图像处理设备120的接口14实现，也可以通过图像处理设备120的处理器11查询预先存储的所述表观特征与其他表观特征的对应关系、获取与所述表观特征相关联的其他表观特征来实现。
获取单元1111,还用于获取所述其他表观特征的描述属性。在本实施方式中,获取单元1111获取所述其他表观特征的描述属性的功能可以通过图像处理设备的处理器11实现。
处理单元1112,还用于通过所述其他表观特征的描述属性修正所述表观特征的描述属性。通过与具有全局属性的表观特征相关联的,且具有局部属性的其他表观特征的描述属性,修正具有全局属性的表观特征的描述属性,提高了识别具有全局属性的表观特征的描述属性的准确率。在本实施方式中,处理单元1112通过所述其他表观特征的描述属性修正所述表观特征的描述属性的功能,可以通过图像处理设备的处理器11实现。
作为一种可选的实施方式,所述处理单元1112,还用于实现基于图4所示的表观特征的描述属性识别方法中的相关步骤实现的功能,具体实现细节可参照基于图4所示的表观特征的描述属性识别方法中的相关步骤,在这里不再赘述。
本发明实施例提供的表观特征的描述属性识别方法及装置中,所述表观特征用于表示所述人物的外表的特性所属的类型,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理,通过获取所述目标图像的表观特征的位置特征,确定所述表观特征所体现的所述人物的部位在预设人物模型中的位置,所述表观特征的位置特征,用于表示所述表观特征所体现的所述人物的部位在预设人物模型中的位置,以根据所述位置特征,识别目标区域,所述目标区域包括所述人物的部位;然后对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性。通过确定具有局部属性的表观特征的位置特征,对于具有局部属性的表观特征,针对性地选取目标图像中的,表观特征所体现的人物的部位所在的目标区域,作为特征分析的识别区域,减少无意义的识别区域,简化图像处理操作过程,节约了描述属性的识别时间,降低了计算机图像处理的工作负荷。
应理解的是，本发明实施例的表观特征的描述属性识别装置1110可以通过专用集成电路(Application Specific Integrated Circuit，ASIC)实现，或可编程逻辑器件(Programmable Logic Device，PLD)实现，上述PLD可以是复杂可编程逻辑器件(Complex Programmable Logic Device，CPLD)，现场可编程门阵列(Field-Programmable Gate Array，FPGA)，通用阵列逻辑(Generic Array Logic，GAL)或其任意组合。通过软件实现图4所示的描述属性识别方法时，表观特征的描述属性识别装置1110及其各个模块也可以为软件模块。
根据本发明实施例的表观特征的描述属性识别装置1110可对应于执行本发明实施例中描述的方法,并且表观特征的描述属性识别装置1110中的各个单元的上述和其它操作和/或功能分别为了实现图4中的方法及和图4中的方法相关的相应流程,为了简洁,在此不再赘述。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或其他任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载或执行所述计算机程序指令时，全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线或者无线(例如红外、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集合的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如，DVD)、或者半导体介质。半导体介质可以是固态硬盘(Solid State Disk，SSD)。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
以上所述，仅为本发明的具体实施方式，但本发明的保护范围并不局限于此，任何熟悉本技术领域的技术人员根据本申请提供的具体实施方式，可想到的变化或替换，都应涵盖在本发明的保护范围之内。

Claims (45)

  1. 一种表观特征的描述属性识别方法,其特征在于,所述方法由图像处理设备执行,包括以下步骤:
    获取目标图像,所述目标图像包括人物;
    获取所述目标图像的表观特征的位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述表观特征的位置特征,用于表示所述表观特征所体现的所述人物的部位在预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理;
    根据所述位置特征,识别目标区域,所述目标区域包括所述人物的部位;
    对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性。
  2. 如权利要求1所述的方法,其特征在于,还包括:
    接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  3. 如权利要求1所述的方法,其特征在于,还包括:
    在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  4. 如权利要求1-3任一所述的方法,其特征在于,还包括:
    以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
    对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
    按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
  5. 如权利要求4所述的方法,其特征在于,以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域,包括:
    将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;
    以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
  6. 如权利要求1-5任一所述的方法,其特征在于,所述获取所述表观特征的位置特征,包括:
    接收包括所述表观特征的位置特征的信息,所述信息用于指示所述表观特征的位置特征。
  7. 如权利要求1-5任一所述的方法，其特征在于，所述获取所述表观特征的位置特征，包括：
    查询预先存储的所述表观特征与位置特征的对应关系;
    根据所述表观特征以及所述对应关系获取所述表观特征的位置特征。
  8. 一种表观特征的描述属性识别方法,其特征在于,所述方法由图像处理设备执行,包括以下步骤:
    获取目标图像,所述目标图像包括人物;
    获取所述人物的表观特征的第一位置特征和第二位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述第一位置特征用于表示所述表观特征所体现的所述人物的第一部位在预设人物模型中的位置,所述第二位置特征用于表示所述表观特征所体现的所述人物的第二部位在所述预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理;
    根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离;
    根据所述最大距离识别目标区域,所述目标区域包括所述第一部位和所述第二部位;
    对所述目标区域进行特征分析,识别所述目标图像中的人物的表观特征的描述属性。
  9. 如权利要求8所述的方法,其特征在于,所述最大距离小于预设阈值。
  10. 如权利要求8或9所述的方法,其特征在于,还包括:
    接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  11. 如权利要求8或9所述的方法,其特征在于,还包括:
    在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  12. 如权利要求8-11任一所述的方法,其特征在于,获取所述人物的表观特征的第一位置特征和第二位置特征,包括:
    接收包括所述表观特征的第一位置特征和第二位置特征的信息,所述信息用于指示所述表观特征的第一位置特征和第二位置特征。
  13. 如权利要求8-11任一所述的方法,其特征在于,获取所述人物的表观特征的第一位置特征和第二位置特征,包括:
    查询预先存储的所述表观特征分别与第一位置特征和第二位置特征的对应关系;
    根据所述表观特征以及所述对应关系获取所述表观特征的第一位置特征和第二位置特征。
  14. 如权利要求8-13任一所述的方法,其特征在于,还包括:
    以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
    对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
    按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
  15. 如权利要求14所述的方法,其特征在于,以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域,包括:
    将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;
    以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
  16. 一种表观特征的描述属性识别装置,其特征在于,包括处理器、存储器,所述存储器存储计算机指令,所述处理器和所述存储器连接;
    所述处理器用于执行所述存储器中的计算机指令,以执行以下步骤:
    获取目标图像,所述目标图像包括人物;
    获取所述目标图像的表观特征的位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述表观特征的位置特征,用于表示所述表观特征所体现的所述人物的部位在预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理;
    根据所述位置特征,识别目标区域,所述目标区域包括所述人物的部位;
    对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性。
  17. 如权利要求16所述的装置,其特征在于,所述处理器,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  18. 如权利要求16所述的装置,其特征在于,所述处理器,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  19. 如权利要求16-18任一所述的装置,其特征在于,所述处理器还用于执行以下步骤:
    以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
    对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
    按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
  20. 如权利要求19所述的装置,其特征在于,
    所述处理器,还用于将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
  21. 如权利要求16-20任一所述的装置,其特征在于,所述处理器,还用于接收包括所述表观特征的位置特征的信息,所述信息用于指示所述表观特征的位置特征。
  22. 如权利要求16-20任一所述的装置,其特征在于,所述处理器,还用于查询预先存储的所述表观特征与位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的位置特征。
  23. 一种表观特征的描述属性识别装置,其特征在于,包括处理器、存储器,所述存储器存储计算机指令,所述处理器和所述存储器连接;
    所述处理器用于执行所述存储器中的计算机指令,以执行以下步骤:
    获取目标图像,所述目标图像包括人物;
    获取所述人物的表观特征的第一位置特征和第二位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述第一位置特征用于表示所述表观特征所体现的所述人物的第一部位在预设人物模型中的位置,所述第二位置特征用于表示所述表观特征所体现的所述人物的第二部位在所述预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理;
    根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离;
    根据所述最大距离识别目标区域,所述目标区域包括所述第一部位和所述第二部位;
    对所述目标区域进行特征分析,识别所述目标图像中的人物的表观特征的描述属性。
  24. 如权利要求23所述的装置,其特征在于,所述最大距离小于预设阈值。
  25. 如权利要求23或24所述的装置,其特征在于,所述处理器,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  26. 如权利要求23或24所述的装置,其特征在于,所述处理器,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  27. 如权利要求23-26任一所述的装置,其特征在于,所述处理器,还用于接收包括所述表观特征的第一位置特征和第二位置特征的信息,所述信息用于指示所述表观特征的第一位置特征和第二位置特征。
  28. 如权利要求23-26任一所述的装置，其特征在于，所述处理器，还用于查询预先存储的所述表观特征分别与第一位置特征和第二位置特征的对应关系；根据所述表观特征以及所述对应关系获取所述表观特征的第一位置特征和第二位置特征。
  29. 如权利要求23-28任一所述的装置,其特征在于,所述处理器还用于执行以下步骤:
    以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
    对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
    按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
  30. 如权利要求29所述的装置,其特征在于,
    所述处理器,还用于将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
  31. 一种表观特征的描述属性识别装置,其特征在于,包括处理单元和获取单元,所述处理单元和所述获取单元连接;
    所述获取单元,用于获取目标图像,所述目标图像包括人物;
    所述获取单元,还用于获取所述目标图像的表观特征的位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述表观特征的位置特征,用于表示所述表观特征所体现的所述人物的部位在预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理;
    所述处理单元,用于根据所述位置特征,识别目标区域,所述目标区域包括所述人物的部位;
    所述处理单元,还用于对所述目标区域进行特征分析,识别所述人物的表观特征的描述属性。
  32. 如权利要求31所述的装置,其特征在于,所述获取单元,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  33. 如权利要求31所述的装置,其特征在于,所述获取单元,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  34. 如权利要求31-33任一所述的装置,其特征在于,
    所述处理单元,还用于以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
    所述处理单元，还用于对所述偏移区域进行特征分析，识别所述人物的表观特征的其他描述属性；
    所述处理单元,还用于按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
  35. 如权利要求34所述的装置,其特征在于,
    所述处理单元,还用于将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;
    所述处理单元,还用于以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
  36. 如权利要求31-35任一所述的装置,其特征在于,所述获取单元,还用于接收包括所述表观特征的位置特征的信息,所述信息用于指示所述表观特征的位置特征。
  37. 如权利要求31-35任一所述的装置,其特征在于,所述获取单元,还用于查询预先存储的所述表观特征与位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的位置特征。
  38. 一种表观特征的描述属性识别装置,其特征在于,包括处理单元和获取单元,所述处理单元和所述获取单元连接;
    所述获取单元,用于获取目标图像,所述目标图像包括人物;
    所述获取单元,还用于获取所述人物的表观特征的第一位置特征和第二位置特征,所述表观特征用于表示所述人物的外表的特性所属的类型,所述第一位置特征用于表示所述表观特征所体现的所述人物的第一部位在预设人物模型中的位置,所述第二位置特征用于表示所述表观特征所体现的所述人物的第二部位在所述预设人物模型中的位置,所述表观特征具有局部属性,所述局部属性用于指示所述图像处理设备对所述目标图像的处理方式为局部处理;
    所述处理单元,用于根据所述第一位置特征和所述第二位置特征,获取所述第一部位和所述第二部位之间的最大距离;
    所述处理单元,还用于根据所述最大距离识别目标区域,所述目标区域包括所述第一部位和所述第二部位;
    所述处理单元,还用于对所述目标区域进行特征分析,识别所述目标图像中的人物的表观特征的描述属性。
  39. 如权利要求38所述的装置,其特征在于,所述最大距离小于预设阈值。
  40. 如权利要求38或39所述的装置,其特征在于,所述获取单元,还用于接收所述表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  41. 如权利要求38或39所述的装置,其特征在于,所述获取单元,还用于在预先存储的所述表观特征与位置属性的对应关系中获取所述人物的表观特征具有的位置属性,所述位置属性用于指示所述表观特征为局部属性。
  42. 如权利要求38-41任一所述的装置,其特征在于,所述获取单元,还用于接收包括所述表观特征的第一位置特征和第二位置特征的信息,所述信息用于指示所述表观特征的第一位置特征和第二位置特征。
  43. 如权利要求38-41任一所述的装置,其特征在于,所述获取单元,还用于查询预先存储的所述表观特征分别与第一位置特征和第二位置特征的对应关系;根据所述表观特征以及所述对应关系获取所述表观特征的第一位置特征和第二位置特征。
  44. 如权利要求38-43任一所述的装置,其特征在于,
    所述处理单元,还用于以所述目标区域为中心,将所述目标区域向指定方向移动以获得一个或多个偏移区域;
    所述处理单元,还用于对所述偏移区域进行特征分析,识别所述人物的表观特征的其他描述属性;
    所述处理单元,还用于按照预设算法从所述描述属性和所述其他描述属性中确定目标描述属性,所述目标描述属性是所述描述属性和所述其他描述属性中最接近目标数据的描述属性。
  45. 如权利要求44所述的装置,其特征在于,
    所述处理单元,还用于将所述目标区域划分为若干个分块图,所述若干个分块图具有相同的形状并且所述若干个分块图之间是连续的;
    所述处理单元,还用于以所述目标区域为中心,并且以一个分块图为偏移单位,将所述目标区域按照一个或倍数个偏移单位向一个或多个方向偏移以获得所述一个或多个偏移区域,其中每个偏移区域的尺寸与所述目标区域的尺寸相同。
PCT/CN2017/077366 2017-03-20 2017-03-20 一种表观特征的描述属性识别方法及装置 WO2018170695A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
BR112019019517A BR112019019517A8 (pt) 2017-03-20 2017-03-20 Método e aparelho para reconhecer atributo descritivo de característica de aparência
EP17902197.7A EP3591580A4 (en) 2017-03-20 2017-03-20 METHOD AND DEVICE FOR RECOGNIZING DESCRIPTIVE CHARACTERISTICS OF A APPEARANCE
JP2019551650A JP6936866B2 (ja) 2017-03-20 2017-03-20 外観特徴の記述属性を認識する方法及び装置
CN201780088761.9A CN110678878B (zh) 2017-03-20 2017-03-20 一种表观特征的描述属性识别方法及装置
PCT/CN2017/077366 WO2018170695A1 (zh) 2017-03-20 2017-03-20 一种表观特征的描述属性识别方法及装置
KR1020197030463A KR102331651B1 (ko) 2017-03-20 2017-03-20 겉보기 특징의 기술 속성을 인식하기 위한 방법 및 장치
US16/577,470 US11410411B2 (en) 2017-03-20 2019-09-20 Method and apparatus for recognizing descriptive attribute of appearance feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/077366 WO2018170695A1 (zh) 2017-03-20 2017-03-20 一种表观特征的描述属性识别方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/577,470 Continuation US11410411B2 (en) 2017-03-20 2019-09-20 Method and apparatus for recognizing descriptive attribute of appearance feature

Publications (1)

Publication Number Publication Date
WO2018170695A1

Family

ID=63583976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077366 WO2018170695A1 (zh) 2017-03-20 2017-03-20 一种表观特征的描述属性识别方法及装置

Country Status (7)

Country Link
US (1) US11410411B2 (zh)
EP (1) EP3591580A4 (zh)
JP (1) JP6936866B2 (zh)
KR (1) KR102331651B1 (zh)
CN (1) CN110678878B (zh)
BR (1) BR112019019517A8 (zh)
WO (1) WO2018170695A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709418A (zh) * 2020-06-02 2020-09-25 支付宝(杭州)信息技术有限公司 一种扫码过程中的提示方法、装置及设备

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN109598176A (zh) * 2017-09-30 2019-04-09 Canon Inc. Recognition apparatus and recognition method
CN108921022A (zh) * 2018-05-30 2018-11-30 Tencent Technology (Shenzhen) Co., Ltd. Human body attribute recognition method, apparatus, device and medium
CN113727901A (zh) * 2019-04-05 2021-11-30 Volvo Truck Corporation Method and control unit for determining a parameter indicative of the road capability of a road segment supporting a vehicle
CN110060252B (zh) * 2019-04-28 2021-11-05 Chongqing Jinshan Medical Technology Research Institute Co., Ltd. Method and apparatus for processing target prompts in an image, and endoscope system
CN110287856A (zh) * 2019-06-21 2019-09-27 Shanghai Shanma Intelligent Technology Co., Ltd. System, method and apparatus for analyzing behavior of on-duty personnel
CN111274945B (zh) * 2020-01-19 2023-08-08 Beijing Baidu Netcom Science and Technology Co., Ltd. Pedestrian attribute recognition method and apparatus, electronic device, and storage medium
CN113989284B (zh) * 2021-12-29 2022-05-10 Guangzhou Side Medical Technology Co., Ltd. Helicobacter pylori auxiliary detection system and detection apparatus

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3938005B2 (ja) 2002-10-23 2007-06-27 Konica Minolta Business Technologies, Inc. Image processing apparatus and image processing method
GB0607143D0 (en) 2006-04-08 2006-05-17 Univ Manchester Method of locating features of an object
JP2012173761A (ja) * 2011-02-17 2012-09-10 Nifty Corp Information processing apparatus, information processing method and program
CN103503029B (zh) * 2011-04-11 2016-08-17 Intel Corporation Method of detecting facial characteristics
KR101302638B1 (ko) 2011-07-08 2013-09-05 TheDNA Co., Ltd. Method, terminal device and computer-readable recording medium for controlling content by sensing head gestures and hand gestures
JP5950296B2 (ja) * 2012-01-27 2016-07-13 National Institute of Advanced Industrial Science and Technology Person tracking attribute estimation apparatus, person tracking attribute estimation method, and program
CN103324907B (zh) * 2012-03-22 2016-09-07 Institute of Computing Technology, Chinese Academy of Sciences Method and system for learning a human appearance model for person re-identification detection
JP5712968B2 (ja) * 2012-05-31 2015-05-07 Denso Corporation Human detection apparatus
US9278255B2 (en) 2012-12-09 2016-03-08 Arris Enterprises, Inc. System and method for activity recognition
WO2015042891A1 (zh) * 2013-09-27 2015-04-02 Huawei Technologies Co., Ltd. Image semantic segmentation method and apparatus
US10120879B2 (en) * 2013-11-29 2018-11-06 Canon Kabushiki Kaisha Scalable attribute-driven image retrieval and re-ranking
CN103810490B (zh) 2014-02-14 2017-11-17 Hisense Group Co., Ltd. Method and device for determining attributes of a face image
CN103984919A (zh) * 2014-04-24 2014-08-13 Shanghai Yousi Communication Technology Co., Ltd. Facial expression recognition method based on rough sets and hybrid features
CN104751454B (zh) 2015-03-11 2018-05-11 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for determining a person contour in an image
CN105160317B (zh) 2015-08-31 2019-02-15 University of Electronic Science and Technology of China Region-block-based pedestrian gender recognition method
US9460613B1 (en) * 2016-05-09 2016-10-04 Iteris, Inc. Pedestrian counting and detection at a traffic intersection based on object movement within a field of view
CN106127173B (zh) * 2016-06-30 2019-05-07 Beijing Xiaobai Century Network Technology Co., Ltd. Deep-learning-based human attribute recognition method
CN106203296B (zh) * 2016-06-30 2019-05-07 Beijing Xiaobai Century Network Technology Co., Ltd. Attribute-assisted video action recognition method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390150A (zh) * 2012-05-08 2013-11-13 Beijing Samsung Telecommunications Technology Research Co., Ltd. Human body part detection method and apparatus
CN103970771A (zh) * 2013-01-29 2014-08-06 Institute of Computing Technology, Chinese Academy of Sciences Human body retrieval method and system
CN104036231A (zh) * 2014-05-13 2014-09-10 Shenzhen Feipulai Sports Development Co., Ltd. Human torso recognition apparatus and method, and finish-line image detection method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709418A (zh) * 2020-06-02 2020-09-25 Alipay (Hangzhou) Information Technology Co., Ltd. Prompting method, apparatus and device for a code-scanning process
CN111709418B (zh) * 2020-06-02 2022-03-04 Alipay (Hangzhou) Information Technology Co., Ltd. Prompting method, apparatus and device for a code-scanning process

Also Published As

Publication number Publication date
EP3591580A4 (en) 2020-03-18
JP6936866B2 (ja) 2021-09-22
CN110678878B (zh) 2022-12-13
BR112019019517A2 (pt) 2020-04-22
JP2020510264A (ja) 2020-04-02
US20200012880A1 (en) 2020-01-09
EP3591580A1 (en) 2020-01-08
BR112019019517A8 (pt) 2023-04-04
KR20190137815A (ko) 2019-12-11
US11410411B2 (en) 2022-08-09
KR102331651B1 (ko) 2021-11-30
CN110678878A (zh) 2020-01-10

Similar Documents

Publication Publication Date Title
WO2018170695A1 (zh) Method and apparatus for recognizing descriptive attribute of appearance feature
US11830141B2 (en) Systems and methods for 3D facial modeling
US10083366B2 (en) Edge-based recognition, systems and methods
US9547908B1 (en) Feature mask determination for images
JP5715833B2 (ja) Posture state estimation apparatus and posture state estimation method
WO2019071664A1 (zh) Face recognition method and apparatus combining depth information, and storage medium
WO2012077287A1 (ja) Posture state estimation apparatus and posture state estimation method
CN104246793A (zh) Three-dimensional face recognition for mobile devices
WO2014043353A2 (en) Methods, devices and systems for detecting objects in a video
TW201605407A (zh) Pupil positioning method and apparatus, and computer program product thereof
JP2012083855A (ja) Object recognition apparatus and object recognition method
US20190325208A1 (en) Human body tracing method, apparatus and device, and storage medium
JP4729188B2 (ja) Gaze detection apparatus
JP2018073308A (ja) Recognition apparatus and program
CN111368674B (zh) Image recognition method and apparatus
CN104156689B (zh) Method and device for locating feature information of a target object
JPWO2012161291A1 (ja) Body part separation position extraction apparatus, program and method
US11417063B2 (en) Determining a three-dimensional representation of a scene
US12002252B2 (en) Image matching system
JP5051671B2 (ja) Information processing apparatus, information processing method and program
CN112183155B (zh) Method and apparatus for building an action posture library, and generating and recognizing action postures
CN113658233A (zh) Non-rigid registration method, apparatus, device and storage medium for three-dimensional face models
US20240202931A1 (en) Measuring method and system for body-shaped data
CN102194110B (zh) Eye localization method in face images based on K-L transform and kernel correlation coefficients
KR101179969B1 (ko) Marker detection apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17902197

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019551650

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112019019517

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2017902197

Country of ref document: EP

Effective date: 20191002

ENP Entry into the national phase

Ref document number: 20197030463

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 112019019517

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20190919