CN106776619B - Method and device for determining attribute information of target object - Google Patents

Method and device for determining attribute information of target object

Info

Publication number
CN106776619B
Authority
CN
China
Prior art keywords
target object
information
target
information associated
place
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510813083.0A
Other languages
Chinese (zh)
Other versions
CN106776619A (en)
Inventor
高福亮
侯文�
李冰冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201510813083.0A
Publication of CN106776619A
Application granted
Publication of CN106776619B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0277Online advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Multimedia (AREA)
  • Game Theory and Decision Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method and a device for determining attribute information of a target object. One embodiment of the method comprises: acquiring a video surveillance image of a target area; detecting a target object in the video surveillance image, wherein the target object comprises a person in the target area; extracting feature information of the target object, item information associated with the target object, and location information associated with the target object based on the video surveillance image; and analyzing the feature information, the item information, and the location information to determine attribute information of the target object in the target area. This embodiment exploits richer information related to the target object and thereby analyzes the target object's attribute information more accurately.

Description

Method and device for determining attribute information of target object
Technical Field
The present application relates to the field of computer technologies, specifically to information retrieval from video surveillance images, and in particular to a method and an apparatus for determining attribute information of a target object.
Background
A user profile may be a collection of user attribute information, that is, a model describing the attributes of a user. Currently, user profiles are generally constructed from behavior log information obtained from a terminal associated with the user (e.g., the user's cell phone or personal computer). Some of this behavior log information is supplied by the user, who may, for example, enter sex, age, and hobbies on the terminal; the terminal may also record and count the user's behavior, for example storing the user's network access records, positioning information, and trajectory information.
In this conventional user profile construction method, the acquisition of behavior log information depends on the user's terminal. However, the terminal cannot capture some of the user's actual behavior, such as a cash purchase of a certain brand of clothing, and when the user does not use a positioning service application, the terminal cannot acquire the user's positioning and trajectory information either. The user information available for constructing the profile is therefore incomplete, and the accuracy of the constructed profile needs improvement. In addition, since terminal-based profile construction generally serves online applications, the user's offline behavior cannot be analyzed in time in an online application scenario, so no accurate decision can be made in response to the user's real-time offline behavior.
Disclosure of Invention
In view of the above, it is desirable to provide a method for constructing a user profile comprehensively and accurately. To solve this technical problem, the present application provides a method and an apparatus for determining attribute information of a target object.
In one aspect, the present application provides a method for determining attribute information of a target object. The method comprises the following steps: acquiring a video monitoring image of a target area; detecting a target object in the video monitoring image, wherein the target object comprises a person in the target area; extracting feature information of the target object, item information associated with the target object, and location information associated with the target object based on the video surveillance image; analyzing and processing the characteristic information of the target object, the item information associated with the target object and the place information associated with the target object to determine the attribute information of the target object in the target area.
In some embodiments, acquiring the video surveillance image of the target area includes: receiving the video surveillance image uploaded by a video surveillance image acquisition device in the target area.
In some embodiments, the detecting the target object in the video surveillance image comprises: performing feature extraction on the video monitoring image; and matching the extracted features with a pre-stored feature template of the target object to determine the target object in the video monitoring image.
In some embodiments, the extracting feature information of the target object, item information associated with the target object, and location information associated with the target object based on the video surveillance image includes: extracting feature information of the target object from the video surveillance image; extracting, from the video surveillance image, the identifier of an item whose distance from the target object is less than a preset distance; retrieving the item information associated with the target object from a pre-stored item information base based on that identifier; extracting, from the video surveillance image, the identifier of the place where the target object is located, the place where the target object is located comprising a place covering the geographic position of the target object; and retrieving the location information associated with the target object from a pre-stored location information base based on the identifier of the place covering the geographic position of the target object.
In some embodiments, the analyzing the feature information of the target object, the item information associated with the target object, and the location information associated with the target object to determine the attribute information of the target object in the target area includes: taking the stored characteristic information, historical article information and historical place information of the historical target object as training samples, and establishing an attribute information identification model; and inputting the characteristic information of the target object, the article information and the place information into the attribute information identification model to obtain the attribute information of the target object.
In some embodiments, the analyzing the feature information of the target object, the item information associated with the target object, and the location information associated with the target object to determine the attribute information of the target object in the target area includes: establishing a correspondence table based on the feature information of historical target objects, the item information associated with the historical target objects, the place information associated with the historical target objects, and the tagged attribute information of the historical target objects; matching the feature information of the target object, the item information associated with the target object, and the place information associated with the target object respectively against those of the historical target objects; and taking the attribute information corresponding to the matched historical feature information, item information, and place information as the attribute information of the target object.
In some embodiments, the method further comprises: and pushing information associated with the target area based on the attribute information of the target object.
In some embodiments, the feature information of the target object includes static feature information and behavior feature information; the static feature information includes at least one of: clothing style, makeup style, expression, posture, and relative position information with respect to other target objects; the behavior feature information includes at least one of: consumption behavior, dwell time, and behavior trajectory information.
In some embodiments, the item information comprises at least one of: type, brand, grade, display position information and attention information; the location information includes at least one of: type, level, relative geographic location to other stores, heat information.
In some embodiments, the attribute information of the target object includes at least one of: age, gender, character, health status, purchasing power, social relationships with other target objects, and interest information; and the analyzing of the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine the attribute information of the target object in the target area includes at least one of the following: analyzing one or more of the age, gender, character, and health status of the target object based on the target object's clothing style, makeup style, expression, and posture; determining social relationship information of the target object based on the relative position information of the target object and other target objects; and analyzing the purchasing power and/or interest information of the target object based on the target object's consumption behavior, dwell time, and behavior trajectory information together with the item information and the place information.
In a second aspect, the present application provides an apparatus for determining attribute information of a target object. The device comprises: the acquisition unit is configured to acquire a video monitoring image of a target area; the detection unit is used for detecting a target object in the video monitoring image, wherein the target object comprises a person in the target area; an extraction unit configured to extract feature information of the target object, item information associated with the target object, and location information associated with the target object based on the video surveillance image; the processing unit is configured to analyze and process the feature information of the target object, the item information associated with the target object, and the location information associated with the target object to determine attribute information of the target object in the target area.
In some embodiments, the acquiring unit is configured to acquire the video surveillance image of the target area as follows: receiving the video surveillance image uploaded by a video surveillance image acquisition device in the target area.
In some embodiments, the detection unit is configured to detect a target object in the video surveillance image as follows: performing feature extraction on the video monitoring image; and matching the extracted features with a pre-stored feature template of the target object to determine the target object in the video monitoring image.
In some embodiments, the extracting unit is configured to extract the feature information of the target object, the item information associated with the target object, and the location information associated with the target object as follows: extracting feature information of the target object from the video surveillance image; extracting, from the video surveillance image, the identifier of an item whose distance from the target object is less than a preset distance; retrieving the item information associated with the target object from a pre-stored item information base based on that identifier; extracting, from the video surveillance image, the identifier of the place where the target object is located, the place where the target object is located comprising a place covering the geographic position of the target object; and retrieving the location information associated with the target object from a pre-stored location information base based on the identifier of the place covering the geographic position of the target object.
In some embodiments, the processing unit is configured to analyze the feature information of the target object, the item information associated with the target object, and the location information associated with the target object to determine the attribute information of the target object within the target area as follows: taking the stored characteristic information, historical article information and historical place information of the historical target object as training samples, and establishing an attribute information identification model; and inputting the characteristic information of the target object, the article information and the place information into the attribute information identification model to obtain the attribute information of the target object.
In some embodiments, the processing unit is configured to analyze the feature information of the target object, the item information associated with the target object, and the location information associated with the target object to determine the attribute information of the target object within the target area as follows: establishing a correspondence table based on the feature information of historical target objects, the item information associated with the historical target objects, the place information associated with the historical target objects, and the tagged attribute information of the historical target objects; matching the feature information of the target object, the item information associated with the target object, and the place information associated with the target object respectively against those of the historical target objects; and taking the attribute information corresponding to the matched historical feature information, item information, and place information as the attribute information of the target object.
In some embodiments, the apparatus further comprises: and the pushing unit is configured to push information associated with the target area based on the attribute information of the target object.
In some embodiments, the feature information of the target object includes static feature information and behavior feature information; the static feature information includes at least one of: clothing style, makeup style, expression, posture, and relative position information with respect to other target objects; the behavior feature information includes at least one of: consumption behavior, dwell time, and behavior trajectory information.
In some embodiments, the item information comprises at least one of: type, brand, grade, display position information and attention information; the location information includes at least one of: type, level, relative geographic location to other stores, heat information.
In some embodiments, the attribute information of the target object includes at least one of: age, gender, character, health status, purchasing power, social relationships with other target objects, and interest information; and the processing unit is configured to analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object in at least one of the following ways: analyzing one or more of the age, gender, character, and health status of the target object based on the target object's clothing style, makeup style, expression, and posture; determining social relationship information of the target object based on the relative position information of the target object and other target objects; and analyzing the purchasing power and/or interest information of the target object based on the target object's consumption behavior, dwell time, and behavior trajectory information together with the item information and the place information.
According to the method and the apparatus for determining attribute information of a target object provided by the application, a video surveillance image of a target area is acquired, a target object in the image is detected, feature information of the target object, item information associated with the target object, and place information associated with the target object are extracted based on the image, and the extracted information is then analyzed to determine the attribute information of the target object in the target area. Richer information related to the target object is thereby exploited, enabling a more accurate analysis of its attribute information.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for determining attribute information of a target object according to the present application;
FIG. 3 is a diagram illustrating an effect of the application scenario of the embodiment shown in FIG. 2;
FIG. 4 is a flow diagram of another embodiment of a method for determining attribute information of a target object according to the present application;
FIG. 5 is a diagram illustrating an effect of the application scenario of the embodiment shown in FIG. 4;
FIG. 6 is a schematic structural diagram illustrating an embodiment of an apparatus for determining attribute information of a target object according to the present application;
fig. 7 is a schematic structural diagram of a computer system suitable for implementing the terminal device or the server according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in fig. 1, system architecture 100 may include video surveillance equipment 101, a server 102, terminal equipment 103, a network 104, and a server 105. Network 104 is the medium used to provide communication links between video surveillance device 101, server 102, terminal devices 103, and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The video monitoring apparatus 101 may be an apparatus for capturing video surveillance images installed inside or outside a building, such as various cameras. The server 102 may be an electronic device for storing video surveillance images of a target area, such as a back-end storage server connected to the video surveillance equipment. The terminal device 103 may be any of various electronic devices; a user may interact with the server 105 through the terminal device 103 to receive or send messages. Terminal devices 103 may include, but are not limited to: smart phones, tablet computers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like.
The server 105 may be a server that provides information support for the terminal device 103, and may be a cloud server. The server 105 may analyze data acquired from the video monitoring apparatus 101 and the server 102 and feed back the analysis result to the terminal apparatus 103 through the network 104.
It should be noted that the method for determining the attribute information of the target object provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the apparatus for determining the attribute information of the target object is generally disposed in the server 105.
It should be understood that the number of video surveillance devices, terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of video surveillance devices, terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for determining attribute information of a target object in accordance with the present application is shown. The method for determining the attribute information of the target object comprises the following steps:
step 201, acquiring a video monitoring image of a target area.
In this embodiment, the target area may be a preset area. In a practical scenario, the target area may be a street, all or part of the area covered by one or more buildings. The target area can be provided with video monitoring equipment for acquiring video monitoring images in the target area. The video surveillance images may be a sequence of images containing information of people, objects, places within the monitored area.
In this embodiment, an electronic device (e.g., the server 105 in fig. 1) on which the method for determining attribute information of a target object runs may acquire the video surveillance image of the target area through a network. The image may be retrieved from a storage device (e.g., the server 102 in fig. 1) that stores the video surveillance images of the target area, for example by receiving video data transmitted from the storage device over the network. In some optional implementations of this embodiment, acquiring the video surveillance image of the target area may include receiving, at a server of the cloud platform, the video surveillance image uploaded by a video surveillance image acquisition device in the target area. In an actual scene, after an image capturing device (e.g., a monitoring camera) captures a video surveillance image of the target area by optical imaging, it may transmit the captured image to the server of the cloud platform through the network, and the server can thus acquire the video surveillance image of the target area in real time.
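As a loose illustration of this acquisition step, the sketch below (Python with OpenCV) reads frames from a camera stream on the server side; the RTSP URL and the streaming transport are assumptions for illustration, since the embodiment only states that images are uploaded over the network.

```python
# Minimal sketch of server-side frame acquisition; the stream URL is a
# hypothetical placeholder, not an endpoint from the application.
import cv2

STREAM_URL = "rtsp://camera.example/mall/entrance"

def read_frames(url: str):
    """Yield video surveillance frames from a network stream."""
    cap = cv2.VideoCapture(url)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:          # stream ended or dropped
                break
            yield frame
    finally:
        cap.release()
```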
Step 202, detecting a target object in the video monitoring image.
Wherein the target object includes a person within the target area. In the present embodiment, a person in the target area is set as an object of analysis. The target object may first be extracted from the video surveillance image by detecting the target object. The detection of the target object may employ various pedestrian detection methods, such as an edge detection method combined with image segmentation, an optical flow detection method, a neural network-based method. As an example, detecting the target object by using the method of edge detection combined with image segmentation may be performed as follows: firstly, determining edges in an image based on the gray scale of the image, then performing morphological processing such as dilation and erosion on the edges, then matching the edge image with a template established based on human morphological characteristics (such as aspect ratio, proportion of head size to whole body, and the like), taking the edges successfully matched as the edges of a target object, and extracting the target object from a video monitoring image by adopting an image segmentation method such as binarization and the like.
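A rough sketch of that edge-detection-plus-segmentation pipeline in Python with OpenCV follows; the Canny thresholds and the aspect-ratio bounds that stand in for the morphological person template are illustrative assumptions, not values from the embodiment.

```python
# Sketch (OpenCV 4.x): grayscale edges -> dilation/erosion -> contour
# candidates filtered by a simple morphological template (aspect ratio).
import cv2

def detect_person_candidates(frame, min_area=2000,
                             aspect_lo=0.25, aspect_hi=0.6):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # edges from image gray scale
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    edges = cv2.dilate(edges, kernel)             # morphological dilation...
    edges = cv2.erode(edges, kernel)              # ...and erosion
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:
            continue
        # People are taller than wide; the bounds are assumed, standing in
        # for matching against a template of human morphological features.
        if aspect_lo <= w / h <= aspect_hi:
            candidates.append((x, y, w, h))
    return candidates
```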
In this embodiment, detecting the target object in the video surveillance image may include: and tracking the target object based on the video monitoring image. The electronic equipment can extract the characteristics of the target object from one video monitoring image, match the characteristics of the target object in other multiple video monitoring images and determine the behavior track of the target object from the successfully matched video monitoring images.
In some optional implementations of this embodiment, detecting the target object in the video surveillance image may include: performing feature extraction on the video surveillance image; and matching the extracted features with a pre-stored feature template of the target object to determine the target object in the video surveillance image. The electronic device may perform feature extraction on the video surveillance image based on algorithms such as Gaussian filtering and Scale-Invariant Feature Transform (SIFT) to obtain one or more feature vectors. The matching degree between a feature vector and a pre-stored person feature vector template can then be computed, for example by Euclidean distance, and if the matching degree exceeds a set threshold, the extracted feature vector is determined to correspond to a target object in the video surveillance image. The image can then be segmented based on the feature vectors to extract the target object from the video surveillance image.
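A minimal sketch of the SIFT-plus-Euclidean-matching variant, assuming the pre-stored template is available as an array of SIFT descriptors; the ratio test and the match-count threshold are common defaults rather than values from the embodiment.

```python
# Sketch: extract SIFT features and match them against a stored template
# under Euclidean (L2) distance; thresholds are illustrative assumptions.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)   # Euclidean distance, as in the text

def matches_template(frame_gray, template_descriptors,
                     ratio=0.7, min_matches=10):
    _, desc = sift.detectAndCompute(frame_gray, None)
    if desc is None:
        return False
    pairs = matcher.knnMatch(desc, template_descriptors, k=2)
    # Keep a match only if it is clearly closer than the runner-up.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches
```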
It should be noted that the video surveillance image in this embodiment may include one or more target objects. Then in step 202 one or more target objects may be detected. The number of target objects to be detected may be pre-configured. The electronic device may automatically detect all target objects in the video surveillance image without pre-configuring the number of target objects to be detected.
Step 203, extracting feature information of the target object, article information associated with the target object and place information associated with the target object based on the video monitoring image.
The feature information of the target object may be external feature information of the target object, such as appearance and form. After determining the target object, the electronic device may extract its feature information in various ways. For example, the gray-scale information of the target object in the video surveillance image may be taken as part of the feature information. For another example, the feature information may be obtained through image recognition: feature points or feature vectors of the target object are extracted from the video surveillance image and classified by a classifier, yielding the feature information of the target object.
In some embodiments, the feature information of the target object may include static feature information and behavioral feature information. The static feature information may include at least one of: clothing style, makeup style, expressions, gestures, relative position information with other target objects. The behavior feature information includes dynamic feature information of the target object, and may include at least one of the following: consumption behavior, dwell time, behavior trace information.
The static feature information and the behavior feature information of the target object may include individual feature information obtained by performing individual analysis on each target object and group feature information obtained based on the degree of closeness between a plurality of target objects.
In some optional implementations of this embodiment, the individual feature information may include information derived directly from surface features of the target object, as well as information extracted by analyzing the target object's behavior, such as clothing style, makeup style, expression, posture information, consumption behavior, dwell time, and behavior trajectory. The expression and makeup style can be extracted from the face region of the target object in the video surveillance image; for example, the makeup style can be extracted from the color of the facial features, and the expression from their relative positions. The clothing style can be extracted from the limb region of the target object, for example based on the length, width, and color of the clothing. The posture information may be extracted from the relative position of the target object's torso; for example, whether the target object is standing may be determined by whether the arms are perpendicular to the legs. The consumption behavior may be detected based on whether the target object stays at a cash register, or by an existing behavior detection algorithm, for example detecting the action of taking out a bank card or cash. The dwell time may be obtained from the imaging times of the image sequence, and the behavior trajectory from the positions at which the target object appears across multiple video surveillance images.
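As a small illustration of how the dwell time and behavior trajectory fall out of the tracked image sequence, the sketch below assumes per-frame detections carrying an imaging timestamp and an image position; the Detection record is hypothetical.

```python
# Sketch: dwell time and trajectory from a tracked detection sequence.
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float   # imaging time of the frame, in seconds
    x: float           # centre of the target object in the image
    y: float

def dwell_time(track: list[Detection]) -> float:
    """Span between the first and last sighting in the sequence."""
    return track[-1].timestamp - track[0].timestamp if track else 0.0

def trajectory(track: list[Detection]) -> list[tuple[float, float]]:
    """Ordered positions of the target object across the sequence."""
    return [(d.x, d.y) for d in track]
```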
In some optional implementations of the embodiment, the group feature information may include individual feature information common to a plurality of target objects, for example, the same clothing style of the plurality of target objects. The group characteristic information may also include relative position information of the target object and other target objects. The electronic equipment can calculate the actual relative position information of the target object and other target objects based on the position relation of the plurality of target objects in the image and the calibrated camera parameters.
In this embodiment, an item associated with the target object is an item whose distance from the target object is less than a preset distance. The electronic device may extract a plurality of items from the image, obtain the three-dimensional space coordinates of each item and of the target object based on camera calibration, and then calculate the distance between each item and the target object; an item within the preset distance is regarded as associated with the target object. Information of the associated item, including color, shape, size, and the like, may be extracted from the image. Alternatively, the distances may be calculated in the image coordinate system, and an item whose distance from the target object in the image coordinate system is less than the preset distance may be taken as an associated item for extracting item information. Optionally, the item information may include at least one of: type, brand, grade, display position information, and attention information. The display position information may be extracted from the image and may include the position of the item relative to other items. The attention information may be obtained from, for example, network search volume and purchase volume for the item.
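In the image-coordinate variant, the association rule reduces to a distance threshold; below is a sketch under that reading, where the pixel threshold and the input format are illustrative assumptions.

```python
# Sketch: items whose image-plane distance to the target object is below
# a preset threshold are treated as associated with it.
import math

def associated_items(person_xy, items, max_dist_px=120):
    """items: iterable of (item_id, (x, y)) centres in image coordinates."""
    px, py = person_xy
    return [item_id for item_id, (ix, iy) in items
            if math.hypot(ix - px, iy - py) < max_dist_px]
```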
In some optional implementations of the present embodiment, the article information associated with the target object may be extracted as follows: firstly, an identification of an article, such as a trademark, a name, a number and the like of the article, with a distance from a target object smaller than a preset distance is extracted from a video monitoring image, and then information of the article corresponding to the identification is retrieved from a pre-stored article information base as article information associated with the target object based on the identification of the article with the distance from the target object smaller than the preset distance. The pre-stored article information base can store the identification and other information of each article, and the identification and other information can be obtained by the electronic equipment through various ways, including searching from a network, receiving article information uploaded by a client and the like.
In this embodiment, the place associated with the target object may include a place covering the geographic position of the target object; in an actual scenario, this is the place where the target object is located. The electronic device may extract the surrounding environment information of the target object as the location information associated with the target object, for example the geographic location information of the street where the target object is, or the layout information of the place where it is. Optionally, the location information may include at least one of: type, level, relative geographic location with respect to other stores, and heat information. The type may be classification information of the place, for example hotel, restaurant, entertainment, shopping, or sports. The level may be rating information of the place; for example, a hotel's star rating, or a restaurant's web rating or consumption-level rating, may serve as its level. The relative geographic location with respect to other stores may be extracted from a single video surveillance image or from a succession of them. The heat information may be attention information of the place, such as people-flow information or network search volume.
In some optional implementations of this embodiment, the location information associated with the target object may be extracted as follows: first, the identifier of a place covering the geographic position of the target object is extracted from the video surveillance image, and then the location information associated with the target object is retrieved from a pre-stored location information base based on that identifier. Specifically, the places in the image can be identified and the range covered by each determined; the position of the target object in the image is then computed, and a place whose coverage contains that position is taken as the place associated with the target object. The identifier of the place (e.g., a road sign, building name, or store name) may then be extracted from the video surveillance image, and the information corresponding to that identifier looked up in the pre-stored location information base in various ways, for example by searching location information stored on a web server over the network, or by retrieving the identifier from location information uploaded by a client.
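A sketch of the place-association and lookup steps, modelling the pre-stored location information base as a dictionary keyed by place identifier and each place's coverage as an axis-aligned rectangle in the image; both representations are assumptions for illustration.

```python
# Sketch: find the place whose coverage contains the target's position,
# then look its identifier up in a pre-stored location information base.
PLACE_INFO_BASE = {   # assumed schema for illustration
    "store_A": {"type": "shopping", "level": 4, "heat": "high"},
}

def place_for(person_xy, place_regions):
    """place_regions: iterable of (place_id, (x0, y0, x1, y1)) in the image."""
    x, y = person_xy
    for place_id, (x0, y0, x1, y1) in place_regions:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return place_id, PLACE_INFO_BASE.get(place_id)
    return None, None
```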
Step 204, analyzing the characteristic information of the target object, the article information associated with the target object and the place information associated with the target object to determine the attribute information of the target object in the target area.
After extracting the feature information of the target object, the article information associated with the target object, and the location information associated with the target object, the extracted information may be analyzed to obtain potential information of the target object as the attribute information of the target object.
In this embodiment, the extracted information may be analyzed by a variety of analysis methods, such as a statistical analysis method and a machine learning analysis method.
In some optional implementations of this embodiment, the electronic device may store the feature information, historical item information, and historical place information of historical target objects, with the attribute information of each historical target object tagged manually. The information extracted in step 203 may then be analyzed as follows: training samples are built from the stored feature information, historical item information, and historical place information of the historical target objects, and an attribute information recognition model is established from these samples. The model recognizes the attribute information of a target object from its input feature information, item information, and location information, and may further be optimized against the tagged attribute information of the historical target objects. The information extracted in step 203 is then input into the attribute information recognition model to obtain the attribute information of the target object.
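The embodiment does not name a model family, so the sketch below uses a random forest from scikit-learn as one plausible attribute information recognition model; the numeric encoding of the feature, item, and place information and the toy training samples are assumptions.

```python
# Sketch: train an attribute recognition model on (feature, item, place)
# encodings of historical target objects with manually tagged attributes.
from sklearn.ensemble import RandomForestClassifier

X_train = [[0.1, 3, 1],    # toy encodings of historical samples (assumed)
           [0.9, 1, 4],
           [0.4, 2, 2]]
y_train = ["youth", "senior", "adult"]   # manually tagged attribute labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At inference time, the information extracted in step 203 is encoded the
# same way and fed to the model.
print(model.predict([[0.5, 2, 3]]))
```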
In some optional implementations of this embodiment, the information extracted in step 203 may be analyzed as follows: a correspondence table is established based on the feature information of historical target objects, the item information associated with them, the place information associated with them, and their tagged attribute information; the feature information, associated item information, and associated place information of the target object are matched against those of the historical target objects; and the attribute information corresponding to the matched historical entries is taken as the attribute information of the target object. In other words, a correspondence table between (feature information, item information, place information) and attribute information can be built from the statistics of historical data, and the attribute information corresponding to the information extracted in step 203 looked up in that table.
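A sketch of the correspondence-table variant, with each entry's feature, item, and place information modelled as tag sets and matching done by tag overlap; the similarity measure and the table schema are illustrative, as the embodiment leaves them open.

```python
# Sketch: look up the tagged attributes of the best-matching historical
# entry in a correspondence table built from historical data.
def best_match(query, table):
    """query: (features, item_info, place_info) as sets of tags;
    table: list of (features, item_info, place_info, attributes)."""
    def overlap(row):
        return sum(len(q & r) for q, r in zip(query, row[:3]))
    return max(table, key=overlap)[3]

table = [({"bright"}, {"sportswear"}, {"gym"}, {"interest": "fitness"})]
print(best_match(({"bright"}, {"sportswear"}, {"mall"}), table))
```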
In some optional implementations of this embodiment, the attribute information of the target object may include at least one of: age, gender, character, health status, purchasing power, social relationships with other target objects, and interest information. One or more of the age, gender, character, and health status of the target object may be derived from its clothing style, makeup style, expression, and posture. For example, the age and gender may be derived by classifier analysis of the target object's facial features and the types of items it attends to; the character may be inferred from expression, posture, clothing color, and the like; as an example, if the target object's clothing colors are bright, its character may be regarded as extroverted; the health status may be inferred from the target object's skin color, expression, body posture, and the like. The social relationship information of the target object may be determined from its relative position to other target objects, for example by judging whether the relative position information satisfies a preset condition indicating family or friend relationships. Alternatively, social relationships among several target objects may be judged from the similarity of their clothing styles; for example, a child and an adult wearing clothes of the same style in different sizes may be taken to be family. The purchasing power and/or interest information of the target object can be obtained by analyzing its consumption behavior, dwell time, and behavior trajectory information together with the item information and place information. Specifically, the purchasing power may be derived from the target object's consumption behavior, the information of the items that behavior targets, and the information of the places where it occurs; the interest information may be derived by jointly analyzing the user's clothing, appearance features, the items targeted by consumption behavior, dwell time, trajectory information, the places visited, and the like.
By performing the above processing on each video monitoring image in the target area, the attribute information of all target objects in the target area can be acquired.
The method for determining attribute information of a target object provided by this embodiment can be applied to user profile construction in offline scenarios. With further reference to fig. 3, an effect diagram of the application scenario of the embodiment shown in fig. 2 is illustrated. As shown in fig. 3, the monitoring devices in a shopping mall may collect video data containing user images and upload it to the cloud platform server. The cloud platform server may analyze and process the video data: for example, it may detect users in the images through a pedestrian detection algorithm, identify each user's external form, behavior trajectory, consumption behavior, and so on, recognize in real time the items a user views or touches, and identify the stores the user enters. The server may then determine the user's attribute information by machine learning based on the identified information and construct the user profile, which may contain the user's interest information. The cloud platform server can feed the user profile back to the merchants of the stores in the mall, so that they can learn the users' characters, interests, and other characteristics in real time and provide targeted service. The server can also run comprehensive statistical analysis over the large volume of collected user profiles, from which the mall can make business decisions. For example, the probability that users consume in two given stores can be counted, and when it exceeds a preset probability threshold, the mall can rearrange the stores' positions to shorten the relative distance between them.
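The closing statistic can be computed as a simple co-occurrence ratio; a sketch assuming per-user consumption records keyed by store identifier, with the record format and threshold as illustrative assumptions:

```python
# Sketch: estimate the probability that a user consumes in both stores
# and compare it against a preset threshold.
def co_consumption_probability(records, store_a, store_b):
    """records: mapping user_id -> set of stores where the user consumed."""
    users = list(records.values())
    both = sum(1 for s in users if store_a in s and store_b in s)
    return both / len(users) if users else 0.0

records = {"u1": {"A", "B"}, "u2": {"A"}, "u3": {"A", "B", "C"}}
if co_consumption_probability(records, "A", "B") > 0.5:  # preset threshold
    print("consider shortening the distance between stores A and B")
```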
In the method provided by the above embodiment of the application, the target object is extracted in real time from video surveillance images, and the target object's external features, behavior features, and associated information are fully exploited, improving the accuracy and comprehensiveness of the determined attribute information. Moreover, because the feature information and related information of the target object are extracted from actual images and the user's offline behavior is analyzed, the reliability of the information sources is improved, a more accurate analysis of the target object's attribute information is achieved, and the accuracy of the determined attribute information is further improved.
With continued reference to FIG. 4, shown is a flow diagram of another embodiment of a method for determining attribute information of a target object in accordance with the present application. The method flow 400 for determining attribute information of a target object includes the steps of:
step 401, acquiring a video monitoring image of a target area.
In this embodiment, the target area may be a preset area. In a practical scenario, the target area may be a street, all or part of the area covered by one or more buildings. The target area can be provided with video monitoring equipment for acquiring video monitoring images in the target area.
In this embodiment, an electronic device (e.g., the server 105 in fig. 1) on which the method for determining attribute information of a target object runs may acquire the video surveillance image of the target area through a network. The image may be retrieved from a storage device (e.g., the server 102 in fig. 1) that stores the video surveillance images of the target area, for example by receiving video data transmitted from the storage device over the network. In an actual scene, after an image capturing device (e.g., a monitoring camera) captures a video surveillance image of the target area by optical imaging, it may transmit the captured image to the server of the cloud platform through the network, and the server can thus acquire the video surveillance image of the target area in real time.
Step 402, detecting a target object in a video monitoring image.
Wherein the target object includes a person within the target area. In the present embodiment, a person in the target area is set as an object of analysis. The target object may first be extracted from the video surveillance image by detecting the target object. The detection of the target object may employ various pedestrian detection methods, such as an edge detection method combined with image segmentation, an optical flow detection method, a neural network-based method.
In this embodiment, detecting the target object in the video surveillance image may include: and tracking the target object based on the video monitoring image. The electronic equipment can extract the characteristics of the target object from one video monitoring image, match the characteristics of the target object in other multiple video monitoring images and determine the behavior track of the target object from the successfully matched video monitoring images.
Step 403, extracting feature information of the target object, item information associated with the target object and location information associated with the target object based on the video surveillance image.
The feature information of the target object may be external feature information of the target object, such as appearance and form. After determining the target object, the electronic device may extract its feature information in various ways. For example, the gray-scale information of the target object in the video surveillance image may be taken as part of the feature information. For another example, the feature information may be obtained through image recognition: feature points or feature vectors of the target object are extracted from the video surveillance image and classified by a classifier, yielding the feature information of the target object.
In this embodiment, an item associated with the target object is an item whose distance from the target object is less than a preset distance. The electronic device may extract a plurality of items from the image, obtain the three-dimensional space coordinates of each item and of the target object based on camera calibration, and then calculate the distance between each item and the target object; an item within the preset distance is regarded as associated with the target object. Information of the associated item, including color, shape, size, and the like, may be extracted from the image. Alternatively, the distances may be calculated in the image coordinate system, and an item whose distance from the target object in the image coordinate system is less than the preset distance may be taken as an associated item for extracting item information. Further, the item information associated with the target object can be obtained by extracting the identifier of the associated item from the video surveillance image and searching the network for the item information corresponding to that identifier.
In this embodiment, the place associated with the target object may include a place covering the geographic location of the target object. In an actual scenario, the place associated with the target object may be the place where the target object is located. The electronic device may extract ambient environment information of the target object as location information associated with the target object. Alternatively, an identification of a place covering the geographical position of the target object may be extracted from the video surveillance image, and then, based on the identification, place information associated with the target object is retrieved from a pre-stored place information base.
Step 404, analyzing the characteristic information of the target object, the item information associated with the target object and the place information associated with the target object to determine the attribute information of the target object in the target area.
After the feature information of the target object, the item information associated with the target object, and the place information associated with the target object have been extracted, the extracted information may be analyzed to obtain latent information about the target object as its attribute information. Specific analysis methods may include, but are not limited to, classification by machine learning and statistical analysis. In the machine learning approach, an attribute information recognition model of the target object is built from historical data, and the feature information of each target object in the target area, together with the associated item information and place information, is input into the model to obtain the attribute information of each target object in the target area. In the statistical analysis approach, a correspondence table of feature information of target objects, associated item information, associated place information, and attribute information is built from historical data, and the information extracted in step 403 is looked up in the table to obtain the attribute information of each target object in the target area.
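A hedged sketch of the machine-learning variant follows: a model is fitted on historical records and applied to the information extracted in step 403. The random forest and the flat vector encoding are assumptions; the embodiment does not prescribe a model family.

```python
# Illustrative attribute information recognition model: train on
# historical (feature, item, place) vectors with labelled attributes,
# then predict attributes for newly extracted information.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_attribute_model(historical_vectors, historical_attributes):
    """historical_vectors: rows concatenating feature/item/place data."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(np.asarray(historical_vectors),
              np.asarray(historical_attributes))
    return model

def attribute_of(model, feature_vec, item_vec, place_vec):
    x = np.concatenate([feature_vec, item_vec, place_vec]).reshape(1, -1)
    return model.predict(x)[0]
```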
Optionally, the attribute information of the target object may include at least one of: age, gender, character, health status, purchasing power, social relationship with other target objects, and interest information.
Step 405, pushing information associated with the target area based on the attribute information of the target object.
In this embodiment, the electronic device may push relevant content to the target object according to the attribute information of the target object obtained in step 404. The pushed content may be preset content. Specifically, information about items or places that may interest the target object can be pushed according to the target object's interest information, and information that interests other target objects of similar age, gender, and health status can be pushed according to the target object's age, gender, and health status.
In this embodiment, the electronic device may calculate a matching degree between each piece of information to be pushed that is associated with the target area and the attribute information of the target object, for example a matching degree between the category of the information to be pushed and the attribute information. One or more pieces of information with the highest matching degree are then pushed to the target object. Pushing may include sending the information over a network to a terminal in the target area. Optionally, the terminal in the target area may be a smart terminal of the target object, such as a mobile phone, tablet computer, or smart watch. After receiving the pushed information, the terminal may present it in the form of an image, text, or audio, so that the target object can view it. As an example, when a user enters a store to purchase a certain type of goods, the cloud platform server may obtain, from the video surveillance images, the user's appearance features, clothing features, accessory features, time spent in the store, related information about the store, and the price and model of the goods, and identify the user's points of interest through machine learning. The cloud platform server may then send the identified points of interest to the clients of other merchants, who may recommend corresponding goods to the user. For example, when a user purchases an electronic product of a certain brand, information about other new electronic products may be pushed to that user, and advertisements for electronic products of the same brand may be pushed to other users of similar age and dress. Further, if a user purchases a certain product, the cloud platform server may derive the age characteristics of users who purchase that product from feature analysis of a large number of such users, and push products liked by other users in that age group to the user.
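One simple way to realize the matching degree is tag overlap between a candidate's category tags and the target's attribute tags, with the highest-scoring candidates selected for pushing. The tag-set representation and scoring are assumptions of this sketch, not the embodiment's formula.

```python
# Illustrative matching degree between information to be pushed and the
# target object's attribute information, plus top-k selection.
def matching_degree(info_tags, attribute_tags):
    info = set(info_tags)
    return len(info & set(attribute_tags)) / len(info) if info else 0.0

def pick_push_info(candidates, attribute_tags, k=1):
    """candidates: list of (info_id, info_tags); returns top-k info ids."""
    ranked = sorted(candidates,
                    key=lambda c: matching_degree(c[1], attribute_tags),
                    reverse=True)
    return [info_id for info_id, _ in ranked[:k]]
```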
With further reference to fig. 5, an effect diagram of an application scenario of the embodiment shown in fig. 4 is illustrated. As shown in fig. 5, the video monitoring devices in a shopping mall may collect monitoring images of the whole mall and upload them to the cloud platform server. The cloud platform server may analyze the monitoring images: for example, it may track the trajectory of a user, analyze the user's features, the features of the goods the user touches, and the features of the stores where the user stays, and analyze these features by machine learning or statistical analysis to obtain a user portrait, including the user's age, gender, hobbies, health status, and consumption level, and whether the user has a family or friend relationship with other users. Thereafter, the cloud platform server may push information about goods or stores in the mall to the user based on the user portrait. For example, in an actual scenario, if the cloud platform server in the integrated service center recognizes that the interest information of a target object includes sports, fitness consultation or discount information for sporting goods may be pushed to it.
In this embodiment, the actual image of the target object serves as the basis for determining the attribute information. This avoids the inaccuracy caused by false information in methods that construct a target object's attribute information from behavior log information stored on a terminal, so the attribute information of the target object can be determined accurately without relying on a mobile terminal.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for determining attribute information of a target object in this embodiment adds a step of pushing information associated with the target area. The scheme described in this embodiment can therefore achieve more targeted information pushing.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for determining attribute information of a target object. The apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for determining attribute information of a target object according to this embodiment includes: an acquisition unit 601, a detection unit 602, an extraction unit 603, and a processing unit 604. The acquisition unit 601 is configured to acquire a video surveillance image of a target area; the detection unit 602 is configured to detect a target object in the video surveillance image, where the target object includes a person in the target area; the extraction unit 603 is configured to extract feature information of the target object, item information associated with the target object, and place information associated with the target object based on the video surveillance image; and the processing unit 604 is configured to analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine attribute information of the target object in the target area.
In this embodiment, the target area may be a preset area in which a video monitoring device is disposed to acquire video surveillance images. The acquisition unit 601 may acquire the video surveillance image of the target area through a network, for example by retrieving it from a storage device that stores the video surveillance images of the target area, by receiving video data transmitted by that storage device over the network, or by directly receiving video data transmitted by the video monitoring device.
In this embodiment, the detection unit 602 may detect the target object in the video surveillance image acquired by the acquisition unit 601, thereby extracting the target object from the image. Various pedestrian detection methods may be employed, such as edge detection combined with image segmentation, optical flow detection, or neural network-based methods. One option is to perform feature extraction on the video surveillance image, match the extracted features against a pre-stored feature template of the target object, and take the image region corresponding to the successfully matched features as the image region of the target object, thereby detecting the target object in the video surveillance image.
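For concreteness, the sketch below uses OpenCV's built-in HOG pedestrian detector as one of the "various pedestrian detection methods" the unit may employ; it is an assumption of this illustration, not the prescribed detector.

```python
# Sketch of pedestrian detection with OpenCV's default HOG people
# detector (an illustrative stand-in for the detection unit's method).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_persons(frame):
    """Return bounding boxes (x, y, w, h) of persons in the frame."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return boxes
```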
The extraction unit 603 may extract the feature information of the target object detected by the detection unit 602 from the video surveillance image acquired by the acquisition unit 601, for example by a feature extraction method. The extraction unit 603 may further extract the item information associated with the target object and the place information associated with the target object based on the relative positional relationships between the target object and the items and places in the video surveillance image.
The processing unit 604 may analyze the information extracted by the extraction unit 603. The analysis may be performed, without limitation, by machine learning or by statistical analysis. Machine-learning analysis may proceed as follows: an attribute information recognition model of the target object is built from historical data, and the feature information of each target object in the target area, together with the associated item information and place information, is input into the model to obtain the attribute information of each target object in the target area. Statistical analysis may be implemented as follows: a correspondence table of feature information of target objects, associated item information, associated place information, and attribute information is built from historical data, and the information extracted by the extraction unit 603 is looked up in the table to obtain the attribute information of each target object in the target area.
In some optional implementations of the embodiment, the apparatus 600 for determining the attribute information of the target object may further include a pushing unit configured to push information associated with the target area based on the attribute information of the target object.
It will be understood that the elements described in apparatus 600 correspond to various steps in the method described with reference to fig. 2. Thus, the operations and features described above for the method are equally applicable to the apparatus 600 and the units included therein, and are not described in detail here.
Those skilled in the art will appreciate that the apparatus 600 for determining attribute information of a target object also includes other well-known structures, such as a processor and a memory, which are not shown in fig. 6 so as not to unnecessarily obscure the embodiments of the present disclosure.
The apparatus for determining attribute information of a target object provided by the above embodiment of the present application can determine the attribute information of the target object based on more comprehensive user characteristics, thereby improving the accuracy of the determined attribute information.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing a terminal device or server of an embodiment of the present application.
As shown in fig. 7, the computer system 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the system 700. The CPU 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a detection unit, an extraction unit, and a processing unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, the acquisition unit may also be described as a "unit for acquiring a video surveillance image of a target area".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above-described embodiments, or may be a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: acquire a video surveillance image of a target area; detect a target object in the video surveillance image, wherein the target object comprises a person in the target area; extract feature information of the target object, item information associated with the target object, and place information associated with the target object based on the video surveillance image; and analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine attribute information of the target object in the target area.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (18)

1. A method for determining attribute information of a target object, the method comprising:
acquiring a video surveillance image of a target area;
detecting a target object in the video surveillance image, wherein the target object comprises a person in the target area;
extracting feature information of the target object, item information associated with the target object, and place information associated with the target object based on the video surveillance image;
analyzing the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine attribute information of the target object in the target area;
pushing information associated with the target area based on the attribute information of the target object;
wherein extracting the place information associated with the target object based on the video surveillance image comprises:
extracting, from the video surveillance image, an identification of the place where the target object is located, the place where the target object is located comprising a place covering the geographic position of the target object; and
retrieving the place information associated with the target object from a pre-stored place information base based on the identification of the place covering the geographic position of the target object.
2. The method of claim 1, wherein acquiring the video surveillance image of the target area comprises:
receiving the video surveillance image uploaded by a video surveillance image acquisition device in the target area.
3. The method of claim 1, wherein detecting the target object in the video surveillance image comprises:
performing feature extraction on the video surveillance image; and
matching the extracted features with a pre-stored feature template of the target object to determine the target object in the video surveillance image.
4. The method of claim 1, wherein extracting the feature information of the target object, the item information associated with the target object, and the place information associated with the target object based on the video surveillance image comprises:
extracting the feature information of the target object from the video surveillance image;
extracting, from the video surveillance image, an identification of an item whose distance from the target object is less than a preset distance; and
retrieving the item information associated with the target object from a pre-stored item information base based on the identification of the item whose distance from the target object is less than the preset distance.
5. The method according to any one of claims 1 to 4, wherein analyzing the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine the attribute information of the target object in the target area comprises:
establishing an attribute information recognition model by using stored feature information, historical item information, and historical place information of historical target objects as training samples; and
inputting the feature information of the target object, the item information, and the place information into the attribute information recognition model to obtain the attribute information of the target object.
6. The method according to any one of claims 1 to 4, wherein analyzing the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine the attribute information of the target object in the target area comprises:
establishing a correspondence table based on feature information of historical target objects, item information associated with the historical target objects, place information associated with the historical target objects, and labeled attribute information of the historical target objects;
matching the feature information of the target object, the item information associated with the target object, and the place information associated with the target object respectively against the feature information of the historical target objects, the item information associated with the historical target objects, and the place information associated with the historical target objects; and
taking, as the attribute information of the target object, the attribute information of the historical target object corresponding to the matched feature information, associated item information, and associated place information.
7. The method according to any one of claims 1 to 4, wherein the feature information of the target object comprises static feature information and behavior feature information;
the static feature information comprises at least one of: clothing style, makeup style, expression, posture, and relative position information with respect to other target objects; and
the behavior feature information comprises at least one of: consumption behavior, dwell time, and behavior trajectory information.
8. The method of claim 7, wherein the item information comprises at least one of: type, brand, grade, display position information, and attention information; and
the place information comprises at least one of: type, level, geographic location relative to other stores, and popularity information.
9. The method of claim 8, wherein the attribute information of the target object comprises at least one of: age, gender, character, health status, purchasing power, social relationship with other target objects, and interest information; and
analyzing the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine the attribute information of the target object in the target area comprises at least one of:
obtaining, through analysis, one or more of the age, gender, character, and health status of the target object based on the clothing style, makeup style, expression, and posture of the target object;
determining, through analysis, social relationship information of the target object based on the relative position information between the target object and other target objects; and
obtaining, through analysis, purchasing power and/or interest information of the target object based on the consumption behavior, dwell time, and behavior trajectory information of the target object together with the item information and the place information.
10. An apparatus for determining attribute information of a target object, the apparatus comprising:
an acquisition unit configured to acquire a video surveillance image of a target area;
a detection unit configured to detect a target object in the video surveillance image, wherein the target object comprises a person in the target area;
an extraction unit configured to extract feature information of the target object, item information associated with the target object, and place information associated with the target object based on the video surveillance image;
a processing unit configured to analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine attribute information of the target object in the target area; and
a pushing unit configured to push information associated with the target area based on the attribute information of the target object;
wherein the extraction unit is configured to extract the place information associated with the target object as follows:
extracting, from the video surveillance image, an identification of the place where the target object is located, the place where the target object is located comprising a place covering the geographic position of the target object; and
retrieving the place information associated with the target object from a pre-stored place information base based on the identification of the place covering the geographic position of the target object.
11. The apparatus of claim 10, wherein the acquisition unit is configured to acquire the video surveillance image of the target area as follows:
receiving the video surveillance image uploaded by a video surveillance image acquisition device in the target area.
12. The apparatus of claim 10, wherein the detection unit is configured to detect the target object in the video surveillance image as follows:
performing feature extraction on the video surveillance image; and
matching the extracted features with a pre-stored feature template of the target object to determine the target object in the video surveillance image.
13. The apparatus of claim 10, wherein the extraction unit is configured to extract the feature information of the target object and the item information associated with the target object as follows:
extracting the feature information of the target object from the video surveillance image;
extracting, from the video surveillance image, an identification of an item whose distance from the target object is less than a preset distance; and
retrieving the item information associated with the target object from a pre-stored item information base based on the identification of the item whose distance from the target object is less than the preset distance.
14. The apparatus according to any one of claims 10 to 13, wherein the processing unit is configured to analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine the attribute information of the target object in the target area as follows:
establishing an attribute information recognition model by using stored feature information, historical item information, and historical place information of historical target objects as training samples; and
inputting the feature information of the target object, the item information, and the place information into the attribute information recognition model to obtain the attribute information of the target object.
15. The apparatus according to any one of claims 10 to 13, wherein the processing unit is configured to analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object to determine the attribute information of the target object in the target area as follows:
establishing a correspondence table based on feature information of historical target objects, item information associated with the historical target objects, place information associated with the historical target objects, and labeled attribute information of the historical target objects;
matching the feature information of the target object, the item information associated with the target object, and the place information associated with the target object respectively against the feature information of the historical target objects, the item information associated with the historical target objects, and the place information associated with the historical target objects; and
taking, as the attribute information of the target object, the attribute information of the historical target object corresponding to the matched feature information, associated item information, and associated place information.
16. The apparatus according to any one of claims 10 to 13, wherein the feature information of the target object comprises static feature information and behavior feature information;
the static feature information comprises at least one of: clothing style, makeup style, expression, posture, and relative position information with respect to other target objects; and
the behavior feature information comprises at least one of: consumption behavior, dwell time, and behavior trajectory information.
17. The apparatus of claim 16, wherein the item information comprises at least one of: type, brand, grade, display position information, and attention information; and
the place information comprises at least one of: type, level, geographic location relative to other stores, and popularity information.
18. The apparatus of claim 17, wherein the attribute information of the target object comprises at least one of: age, gender, character, health status, purchasing power, social relationship with other target objects, and interest information; and
the processing unit is configured to analyze the feature information of the target object, the item information associated with the target object, and the place information associated with the target object in at least one of the following manners:
obtaining, through analysis, one or more of the age, gender, character, and health status of the target object based on the clothing style, makeup style, expression, and posture of the target object;
determining, through analysis, social relationship information of the target object based on the relative position information between the target object and other target objects; and
obtaining, through analysis, purchasing power and/or interest information of the target object based on the consumption behavior, dwell time, and behavior trajectory information of the target object together with the item information and the place information.
CN201510813083.0A 2015-11-20 2015-11-20 Method and device for determining attribute information of target object Active CN106776619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510813083.0A CN106776619B (en) 2015-11-20 2015-11-20 Method and device for determining attribute information of target object

Publications (2)

Publication Number Publication Date
CN106776619A CN106776619A (en) 2017-05-31
CN106776619B (en) 2020-09-04

Family

ID=58886108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510813083.0A Active CN106776619B (en) 2015-11-20 2015-11-20 Method and device for determining attribute information of target object

Country Status (1)

Country Link
CN (1) CN106776619B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164450A (en) * 2011-12-15 2013-06-19 腾讯科技(深圳)有限公司 Method and device for pushing information to target user
CN103491110A (en) * 2012-06-11 2014-01-01 上海博路信息技术有限公司 Information system based on mobile dynamic data engine

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8775452B2 (en) * 2006-09-17 2014-07-08 Nokia Corporation Method, apparatus and computer program product for providing standard real world to virtual world links
CN102376061B (en) * 2011-08-26 2015-04-22 浙江工业大学 Omni-directional vision-based consumer purchase behavior analysis device
CN103745223B (en) * 2013-12-11 2017-11-21 深圳先进技术研究院 A kind of method for detecting human face and device
CN104750711A (en) * 2013-12-27 2015-07-01 珠海金山办公软件有限公司 Document push reminding method and document push reminding device
CN104881642B (en) * 2015-05-22 2018-10-26 海信集团有限公司 A kind of content delivery method, device and equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant