WO2020240851A1 - System, method, or program - Google Patents

System, method, or program

Info

Publication number
WO2020240851A1
WO2020240851A1 · PCT/JP2019/021811
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
target
partial
unit
Application number
PCT/JP2019/021811
Other languages
French (fr)
Japanese (ja)
Inventor
佐々木 雄一
ダニエルズ・アレックス・エドワード
聞浩 周
山本 正晃
ジニト バット
路威 重松
Original Assignee
ニューラルポケット株式会社 (Neural Pocket Inc.)
Application filed by ニューラルポケット株式会社
Priority to PCT/JP2019/021811 (WO2020240851A1)
Priority to JP2019529956A (JPWO2020240851A1)
Publication of WO2020240851A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/254: Analysis of motion involving subtraction of images

Definitions

  • The technology disclosed in this application relates to an information processing system, an information processing device, a server device, a program, or a method.
  • JP-A-2019-48365; Japanese Unexamined Patent Publication No. 2012-164255; Japanese Unexamined Patent Publication No. 2013-239011; JP-A-2010-118862; JP-A-2009-110185
  • Patent Documents 1 and 2 improve accuracy only for a part of the subject. The invention of Patent Document 3 is limited to improving the clustering itself. The invention of Patent Document 4 can only move the background. The invention of Patent Document 5 notes a problem with using a camera but requires a laser. The approach of Non-Patent Document 1 has not been developed into a concrete invention. Various embodiments of the present invention therefore provide an information processing system, an information processing device, a server device, a program, or a method that addresses these problems.
  • The first system of one embodiment comprises: an extraction unit that extracts, from a target image, partial information relating to a moving part in a moving image that includes the target image; and a generation unit that generates, based on the extracted partial information, information on the target related to that partial information.
  • Another system of one embodiment comprises an extraction unit that extracts partial information from the target image, and a generation unit that generates target information based on the image related to the extracted partial information, wherein the image information amount of the background region of that image is smaller than the image information amount of the corresponding portion of the target image.
  • The second system of one embodiment is the first system in which the generation unit generates information on the target related to the image of the partial information by applying that image to a machine learning unit in which the relationship between images and targets has been machine-learned.
  • In the third system of one embodiment, the extraction unit extracts the partial information by using the movement of the target in the moving image.
  • In the fourth system of one embodiment, the extraction unit extracts the partial information by using the difference between one image and another image among the plurality of images constituting the moving image.
  • In the fifth system of one embodiment, the partial information includes a part corresponding to the movement of the target in the moving image.
  • In the sixth system of one embodiment, the generation unit comprises a transmitter that transmits the image related to the partial information, and an acquisition unit that acquires target information corresponding to the transmitted image.
  • The seventh system of one embodiment is the system of any one of the first to sixth, in which the generation unit includes a machine learning unit that machine-learns the relationship between images and targets, and generates the target information related to the image of the partial information by applying that image to the machine learning unit.
  • In the eighth system of one embodiment, the partial information is a partial image of the target image.
  • In the ninth system of one embodiment, the partial information is information on feature points of a partial image of the target image.
  • In the tenth system of one embodiment, the partial information is information related to the background difference of the target image.
  • In the eleventh system of one embodiment, the partial information is an image obtained by binarizing the background difference of the target image.
  • The twelfth system of one embodiment is the system of any one of the first to eleventh, further comprising an estimation unit that estimates the number of targets for a cluster related to the partial information on the basis of predetermined rules.
  • The thirteenth system of one embodiment is the system of any one of the first to twelfth, further comprising a statistical processing unit that generates statistical information.
  • The fourteenth aspect of one embodiment is a method in which a computer executes: a step of extracting, from a target image, a partial image related to a moving part of a moving image that includes the target image; and a step of generating, based on the extracted partial image, information on the target related to that partial image.
  • The fifteenth aspect of one embodiment is a program for operating a computer as any one of the first to thirteenth systems.
  • The sixteenth system of one embodiment comprises: a clustering unit that generates clusters based on corresponding feature points in at least two of the plurality of images constituting a moving image; and an estimation unit that estimates the number of objects in each cluster based on predetermined rules.
  • In the seventeenth system of one embodiment, the predetermined rules are associated with the camera from which the image was acquired.
  • The eighteenth system of one embodiment is the system of any one of the first to seventeenth, in which the extraction unit is applied to a first target image and a second target image constituting the moving image.
  • The nineteenth system of one embodiment is the system of any one of the first to eighteenth, comprising a statistical processing unit that performs statistical processing using positions related to the target.
  • The twentieth aspect of one embodiment is a program for operating a computer as any one of the first to nineteenth systems.
  • According to the above, the data obtained from images can be utilized more appropriately.
  • FIG. 1 is a block diagram showing a specific example of the function of the information processing apparatus according to the embodiment.
  • FIG. 2 is a diagram schematically showing the extraction of information according to one embodiment.
  • FIG. 3 is an example schematically showing the data possessed by the system according to the embodiment.
  • FIG. 4 is a diagram schematically showing the extraction of information by the system according to one embodiment.
  • FIG. 5 is an example schematically showing the data possessed by the system according to the embodiment.
  • FIG. 6 is an example schematically showing the relationship of information processed in the system according to one embodiment.
  • FIG. 7 is an example for explaining the number of clusters according to one embodiment.
  • FIG. 8 is an example of a screen displayed by the system according to the embodiment.
  • FIG. 9 is a diagram illustrating an example of a boundary handled by the system according to an embodiment.
  • FIG. 10 is an example of a screen displayed by the system according to the embodiment.
  • FIG. 11 is an example of a screen displayed by the system according to the embodiment.
  • FIG. 12 is an example of a screen displayed by the system according to the embodiment.
  • FIG. 13 is an example schematically showing the data possessed by the system according to the embodiment.
  • FIG. 14 is a block diagram showing an overall picture including the configuration of the system according to one embodiment.
  • FIG. 15 is an example schematically showing the facility corresponding to the system according to the embodiment.
  • FIG. 16 is an example schematically showing the data possessed by the system according to the embodiment.
  • FIG. 17 is an example of a screen displayed by the system according to the embodiment.
  • FIG. 18 is an example of a flow processed by the system according to one embodiment.
  • FIG. 19 is an example of a flow processed by the system according to the embodiment.
  • FIG. 20 is an example of a flow processed by the system according to one embodiment.
  • FIG. 1 is a block diagram showing a specific example of the function related to the system of this example.
  • A system of one example may have some or all of a partial information extraction unit, a partial information relationship generation unit, a position identification unit, a target number estimation unit, an information generation unit, a boundary unit, a statistical processing unit, and a tracking unit. None of these functions is essential to the system: even a subset has technical significance corresponding to each function, and each combination of functions has technical significance corresponding to that combination.
  • The partial information extraction unit has a function of extracting partial information from the target image.
  • The target image may be one of a plurality of images constituting a moving image.
  • The partial information may relate to a moving part in the moving image that includes the target image: for example, a part corresponding to the movement of the target in the moving image, an object that includes such a part, or a part of such an object. The partial information may also be a part of the target image itself, or feature points of the target appearing in the target image.
  • The partial information may be an area of the target image, image information of the target image, or a combination of the two.
  • When image information is used as partial information, for example, if the image related to the partial information is a bitmap image, the number of pixels or the number of bits per pixel may be reduced; if it is a vector image, a subset of the constituent figures may be used.
  • The information amount of an image is also referred to as the image information amount. Examples include the number of pixels, the number of bits per pixel, the number of colors, and the number of figures; anything that indicates the amount of information in an image may be used.
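The two reductions mentioned above can be sketched in a few lines of Python (an illustration only; the patent does not prescribe any particular implementation, and the 4x4 bitmap below is invented for the example):

```python
def downsample(bitmap, factor=2):
    """Reduce the pixel count: keep every `factor`-th pixel in each dimension."""
    return [row[::factor] for row in bitmap[::factor]]

def quantize(bitmap, bits=4):
    """Reduce the bits per pixel: truncate each 8-bit value to `bits` bits."""
    shift = 8 - bits
    return [[(p >> shift) << shift for p in row] for row in bitmap]

img = [[17, 34, 51, 68],
       [85, 102, 119, 136],
       [153, 170, 187, 204],
       [221, 238, 255, 0]]
small = downsample(img)    # a 2x2 image instead of 4x4
coarse = quantize(img)     # 16 grey levels instead of 256
```

Either reduction lowers the image information amount of the partial information and, with it, the processing burden of later steps.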
  • The partial information extraction unit of one example may have a partial image extraction unit and/or a feature point extraction unit in order to extract partial information.
  • The partial image extraction unit has a function of extracting a partial image from the target image.
  • The partial image extraction unit of one example may extract, as a partial image, the difference between one image and another image.
  • The partial image may be a part of the one image or a part of the other image.
  • The one image and the other image may be adjacent in the time series of the moving image, or may be separated by a certain interval, for example several microseconds to several seconds.
  • The partial image extraction unit may extract one or a plurality of partial images from a single target image.
  • For example, when the imaging device is a traffic camera that captures a road on which vehicles, people, animals (including pets), and the like (hereinafter also referred to as "vehicles, etc.") pass, the difference between one image and another image may be a vehicle or the like in the portion of the moving image that moves during the time difference between the two images. In general, the difference between one image and another image relates to a moving part of the moving image arising from the time difference between them.
  • Similarly, in a facility, users, managers, service providers, pets, and the like may pass; these are hereinafter also referred to as "users, etc.".
  • Either the one image or the other image may be a background image.
  • As a background image, for example, for a traffic camera that captures a road on which vehicles, etc. pass, an image captured during a time when no vehicle, etc. is present may be used; for a camera in a facility described later, an image captured while no user, etc. is present in the facility may be used.
  • Alternatively, an image obtained by removing the vehicles, etc. or users, etc. from captured images may be used as the background image.
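The difference-based extraction described above can be sketched as simple frame differencing against a background image (a minimal illustration with invented 3x3 grayscale frames; the patent text does not fix a concrete algorithm):

```python
def frame_difference(frame_a, frame_b, threshold=30):
    """Binary mask of the 'moving part': 1 where the two frames differ
    by more than `threshold`, 0 elsewhere."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
current = [[10, 10, 10],
           [10, 200, 10],
           [10, 10, 10]]
mask = frame_difference(background, current)
# mask marks only the pixel where something appeared against the background
```

The nonzero region of the mask corresponds to the partial image; binarizing it in this way also matches the eleventh system described above.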
  • The feature point extraction unit has a function of extracting, from the target image, feature points in a part of the target image.
  • The feature point extraction unit of one example may extract feature points from the difference between the feature points of one image and the feature points of another image in a moving image composed of a plurality of images.
  • A feature point may belong to the one image or to the other image; the relationship between the one image and the other image is the same as described above.
  • The feature point extraction unit may extract one or a plurality of feature points from one target image.
  • A feature point may indicate a part of a target in the image.
  • If the target is a creature such as a person or a pet, a feature point may be part or all of the head, ears, eyes, nose, mouth, shoulders, elbows, hands, waist, knees, ankles, and so on; if the target is a vehicle, part or all of the bumpers, wheels, roof, pillars, side sills, vehicle contour, and so on. These are all examples: a feature point need not be tied to a specific point on a person or vehicle, and whatever point is obtained as a feature point by the method used can serve as a feature point.
  • The feature point extraction unit may identify feature points in the image by a well-known method. When one or more feature points identified from an image are associated with one target A, such feature points may be referred to as "feature points of A".
  • The partial information relationship generation unit has a function of generating relationships between pieces of partial information in target images.
  • The partial information relationship generation unit of one example may have a function of generating a relationship between first partial information in a first target image and second partial information in a second target image.
  • The relationship may be the presence or absence of a relationship between the first partial information and the second partial information, or an index of how strong the relationship is: for example, that a relationship exists, that no relationship exists, or that the relationship is strong.
  • Whether a relationship exists may mean whether the target related to the first partial information in the first target image and the target related to the second partial information in the second target image are the same in the real world.
  • This identity may be that of a single object or of a plurality of objects, and in the case of a plurality of objects, the combination may be of the same type or of different types. In this sense, such targets are hereinafter referred to as "pseudo-identical objects".
  • Examples of pseudo-identical objects include a person, a vehicle, a pet, a person and a person, a person and a bicycle, a person and a pet, an automobile and the people in it, a vehicle carrying another vehicle, and so on: in short, anything that moves together as a single area in the images of the moving image captured by an imaging device.
  • The relationship may be an index indicating how likely it is that the target related to the first partial information in the first target image and the target related to the second partial information in the second target image are pseudo-identical objects.
  • The index may be a numerical value or one of a plurality of ranks defined in advance.
  • The relationship may also be a specific correspondence between the first partial information and the second partial information.
  • For example, it may be a correspondence between location 1-1 in the first partial information and location 2-1 in the second partial information, and between location 1-2 in the first partial information and location 2-2 in the second partial information.
  • Each of these locations may be any information related to the first or second partial information, such as a position in an image, a feature point, a vector, or a cluster.
  • The partial information relationship generation unit of one example may store the relationship in association with the partial information, for example the relationship between the first partial information and the second partial information.
  • When the partial information relationship generation unit of one example can process images as partial information, it may generate a relationship between a first partial image related to the first partial information and a second partial image related to the second partial information.
  • The method by which the partial information relationship generation unit generates the relationship between the first partial image in the first target image and the second partial image in the second target image may be any of various methods that can show that the two partial images correspond to the same target, such as pattern matching between the images, comparison of their densities, or the closeness of their positions. Since the first and second partial images are both small parts of the images captured by the imaging device, there is an advantage that the computational burden of generating the relationship is small.
  • The density of the images may be compared, for a binary image such as a silhouette or a black-and-white image, by whether the difference in the proportion of black (or white) pixels in the whole image is within a predetermined range.
  • The closeness of positions may be, for example, the closeness between the position of partial image 02a in image 01a and the position of partial image 02b in image 01b.
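The density and position comparisons just described can be combined into a simple matching heuristic. The following Python sketch is illustrative only: the silhouettes, bounding boxes, and tolerance values are invented for the example, not taken from the patent.

```python
import math

def density(mask):
    """Proportion of foreground pixels in a binarized partial image."""
    total = sum(len(row) for row in mask)
    return sum(map(sum, mask)) / total

def center(box):
    """Center of a partial image's bounding box (x, y, width, height)."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def likely_same_target(mask_a, box_a, mask_b, box_b,
                       density_tol=0.1, distance_tol=20.0):
    """Heuristic: the two partial images correspond to the same target
    when their densities and their positions are both close."""
    close_density = abs(density(mask_a) - density(mask_b)) <= density_tol
    close_position = math.dist(center(box_a), center(box_b)) <= distance_tol
    return close_density and close_position

# Partial image 02a in image 01a and partial image 02b in image 01b.
silhouette_a = [[1, 1], [1, 0]]
silhouette_b = [[1, 1], [0, 1]]
same = likely_same_target(silhouette_a, (0, 0, 2, 2),
                          silhouette_b, (5, 0, 2, 2))
```

Because only small binary masks and box coordinates are compared, this kind of check keeps the per-pair computation light, which is the advantage the text notes.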
  • FIG. 2 schematically shows the extraction of partial information by the partial information extraction unit of one example.
  • Images 01a to 01d constituting a moving image captured by an imaging device are captured in chronological order in this order, and 02a to 02d, partial images within the respective images, are extracted as partial information from the differences between adjacent images.
  • These partial images 02a to 02d may be the parts of the images extracted as the portions related to movement.
  • The partial images 02a to 02d may each be a part of the images 01a to 01d, respectively.
  • The partial information relationship generation unit of one example generates relationships between partial images treated as partial information; for example, it may generate the relationship between 02a and 02b.
  • The partial information relationship generation unit of one example may show, by pattern matching and comparison of image densities, that 02a and 02b both correspond to the same target.
  • The partial information relationship generation unit of one example may store partial images judged to be the same target in association with a single tracking ID.
  • FIG. 3 shows an example in which partial image IDs 001, 003, and 006 are stored in association with tracking ID 01, and partial image IDs 002, 004, and 009 are stored in association with tracking ID 02. With such a configuration, it is possible to determine where a target appearing in a partial image has moved, even as it moves across the different images constituting the moving image.
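A tracking table like the one in FIG. 3 can be held as a plain mapping from tracking ID to the partial-image IDs judged to be the same target. The Python sketch below is an illustration; the ID strings and the helper name `record_match` are invented for the example.

```python
# Tracking table in the spirit of FIG. 3: each tracking ID collects the
# partial-image IDs judged to belong to the same target over time.
tracking_table = {
    "tracking_01": ["img_001", "img_003", "img_006"],
    "tracking_02": ["img_002", "img_004", "img_009"],
}

def record_match(table, tracking_id, partial_image_id):
    """Associate a newly matched partial image with an existing track, or
    open a new track for a target seen for the first time."""
    table.setdefault(tracking_id, []).append(partial_image_id)

record_match(tracking_table, "tracking_01", "img_010")  # target 01 moved on
record_match(tracking_table, "tracking_03", "img_011")  # a newly seen target
```

Reading one row of the table then gives the trajectory of a single target across the images constituting the moving image.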
  • When a plurality of partial images are extracted from image 01b as two or more pieces of partial information, the partial information relationship generation unit of one example may generate both the relationship between partial image 02a and partial image 02b and the relationship between partial image 02a and partial image 03b, and may then identify, among the plurality of partial images, the one most strongly related to partial image 02a. This has the advantage of increasing the possibility that the target can be tracked appropriately even when the target moves quickly in the image.
  • The partial information relationship generation unit of one example may generate relationships in order of positional closeness and, once the strength of a relationship exceeds a predetermined value, skip generating relationships with the remaining candidate images. In this case there is the advantage that the amount of calculation is reduced.
  • When the partial information relationship generation unit of one example can process feature points as partial information, it may generate a relationship between one or more first feature points related to the first partial information and one or more second feature points related to the second partial information.
  • The relationship may be a correspondence between the one or more first feature points and the one or more second feature points, one or more vectors generated from the first and second feature points, or one or more clusters generated from those vectors.
  • The vector specifying unit has a function of specifying vectors of feature points.
  • A feature point vector may be specified from two successive images. For example, when a second image follows a first image, a vector may be specified whose start point is feature point A in the first image and whose end point is the feature point B corresponding to feature point A in the second image.
  • That is, for feature point A in the first image, the vector specifying unit determines the corresponding feature point B in the second image and thereby specifies the vector.
  • A well-known method for specifying such vectors may be used.
  • A vector whose length is smaller than a predetermined value may be used as-is or may be excluded. Excluding such vectors reflects the judgment that the movement between the first image and the second image is minute, and has the advantage of reducing the amount of calculation.
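The vector specification with a minimum-length filter can be sketched as follows (illustrative only: the correspondence between the two point lists is assumed to be already known, and the coordinates are invented for the example):

```python
import math

def motion_vectors(points_first, points_second, min_length=1.0):
    """Pair each feature point A in the first image with its counterpart B
    in the second image, and keep the displacement vectors A -> B whose
    length is at least `min_length` (tiny movements are excluded)."""
    vectors = []
    for (ax, ay), (bx, by) in zip(points_first, points_second):
        v = (bx - ax, by - ay)
        if math.hypot(*v) >= min_length:
            vectors.append(((ax, ay), v))
    return vectors

# One feature point moved clearly; the other only jittered sub-pixel.
vectors = motion_vectors([(0, 0), (10, 10)], [(3, 4), (10.2, 10.1)])
# only the (0, 0) -> (3, 4) displacement survives the length filter
```

Dropping the sub-pixel jitter up front means the later clustering step has fewer vectors to process, which is the calculation-saving advantage the text describes.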
  • The clustering unit has a function of clustering the vectors, and may generate the resulting clusters. Clustering may be based on the positions of the vectors, the lengths of the vectors, or both. A well-known clustering method may be used.
  • When the clustering unit clusters using vector positions, for example by classifying vectors whose positions differ by more than a predetermined value into different clusters, there is the advantage that feature points can be presumed to belong to different objects, since each object in the image occupies a certain area.
  • When the clustering unit clusters using vector lengths, for example by classifying vectors whose lengths differ by more than a predetermined value into different clusters, or vectors whose lengths fall within a predetermined range into the same cluster, feature points related to vectors with the same velocity can be judged to belong to the same object. For example, when the speeds of a first object and a second object differ by a predetermined value or more, the vectors related to the first object and the vectors related to the second object can be accurately separated into different clusters on the basis of speed, even if the positions of the two objects are close.
  • When the clustering unit uses both the positions and the lengths of the vectors, it has both of the above advantages, and combining them allows clustering with higher accuracy.
  • Here, the position of a vector means the position of its feature points in the first image and/or the second image; it may be the position of the vector's start point or the position of its end point.
  • FIG. 5 shows an example of clusters generated by the clustering unit and of the table it stores: clusters are generated for one or more vectors, each composed of two feature points, and labeled C1 and C2.
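The position-and-length clustering described above can be sketched with a simple greedy scheme (an assumption for illustration: the patent only requires some well-known clustering method, and the tolerances and coordinates below are invented):

```python
import math

def cluster_vectors(vectors, pos_tol=15.0, len_tol=2.0):
    """Greedy clustering: a vector joins a cluster when both its start
    position and its length are close to the cluster's first member;
    otherwise it seeds a new cluster."""
    clusters = []
    for start, v in vectors:
        length = math.hypot(*v)
        for cluster in clusters:
            seed_start, seed_v = cluster[0]
            if (math.dist(start, seed_start) <= pos_tol
                    and abs(length - math.hypot(*seed_v)) <= len_tol):
                cluster.append((start, v))
                break
        else:
            clusters.append([(start, v)])
    return clusters

# Two nearby vectors of similar length (one object) and one distant vector.
clusters = cluster_vectors([((0, 0), (5, 0)),
                            ((3, 0), (5, 1)),
                            ((100, 100), (5, 0))])
```

The distant vector ends up alone in its own cluster even though its length matches, showing how position and length jointly separate objects.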
  • The partial information relationship generation unit of one example can thus generate a relationship between the one or more first feature points and the one or more second feature points by using the feature points; the correspondence between feature points captures such differences in movement.
  • The partial information extraction unit of one example extracts a first partial image from the first image (step 1).
  • The partial information extraction unit extracts a second partial image from the second image (step 2).
  • The partial information relationship generation unit generates the relationship between the first partial image and the second partial image (step 3).
  • Thus, a system of one example may comprise an extraction unit that extracts first partial information related to a first image and second partial information related to a second image, and a generation unit that generates a relationship between the first partial information and the second partial information.
  • Because such a system can generate the relationship between the first partial information and the second partial information, it has the advantage that partial information can be tracked without identifying specific information, such as a name related to the partial information, using a trained model or the like.
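Steps 1 to 3 above can be sketched end to end. In this illustrative Python version (not taken from the patent), the "partial information" is simply the set of pixel coordinates that differ from a background image, and the relationship is judged from centroid closeness:

```python
import math

def extract_partial(frame, background, threshold=30):
    """Steps 1 and 2: partial information as the coordinates of pixels
    that differ from the background by more than `threshold`."""
    return [(x, y)
            for y, (row, brow) in enumerate(zip(frame, background))
            for x, (p, b) in enumerate(zip(row, brow))
            if abs(p - b) > threshold]

def centroid(pixels):
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def related(part_a, part_b, distance_tol=5.0):
    """Step 3: the two pieces of partial information are related when
    their centroids are close."""
    return math.dist(centroid(part_a), centroid(part_b)) <= distance_tol

background = [[0] * 6 for _ in range(4)]
frame_1 = [[0] * 6 for _ in range(4)]
frame_1[1][1] = 255                     # target at (1, 1)
frame_2 = [[0] * 6 for _ in range(4)]
frame_2[1][2] = 255                     # target moved to (2, 1)
part_1 = extract_partial(frame_1, background)
part_2 = extract_partial(frame_2, background)
same_target = related(part_1, part_2)
```

No trained model is consulted anywhere in the pipeline, which is the tracking-without-identification advantage noted above.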
  • Likewise, the partial information extraction unit of one example extracts a first partial image from the first image (step 1) and a second partial image from the second image (step 2).
  • The partial information relationship generation unit of one example generates the relationship between the first partial image related to the first partial information and the second partial image related to the second partial information, and stores the relationship when it is judged that the two partial images indicate pseudo-identical objects.
  • When this cannot be judged, the partial information relationship generation unit of one example generates the relationship between the first partial information and the second partial information by using clusters of the feature points related to the first partial information and the feature points related to the second partial information, and thereby determines whether the objects are pseudo-identical.
  • A system of one example may comprise: an extraction unit that extracts first partial information related to a first image and second partial information related to a second image; a first generation unit that generates a relationship between the first partial image related to the first partial information and the second partial image related to the second partial information; and a second generation unit that, after the processing of the first generation unit, generates the relationship between the first partial information and the second partial information based on the feature points related to each.
  • Compared with comparing the images directly, judging pseudo-identical objects from feature points requires extracting and clustering the feature points. It is therefore advantageous to prioritize the comparison between images and fall back on cluster-based processing only when the image comparison runs into difficulty, for example when it cannot determine whether the objects are pseudo-identical; this keeps the overall processing amount small.
  • A system of one example may comprise: an extraction unit that extracts first partial information related to a first image and second partial information related to a second image; a first generation unit that generates a relationship between the first partial image related to the first partial information and the second partial image related to the second partial information; a second generation unit that, after the processing of the first generation unit, generates the relationship between the first partial information and the second partial information based on the feature points related to each; and a third generation unit that, after the processing of the second generation unit, generates information related to the first partial image and information related to the second partial image using a trained model, and generates the relationship between the first partial information and the second partial information based on that generated information.
  • When both the comparison between images and the cluster-based processing run into difficulty, generating information about each piece of partial information with a trained model, although computationally heavier, has the advantage that the relationship can still be generated while keeping the overall processing amount down.
• The position specifying unit has a function of specifying a position related to partial information.
• The position specifying unit of one example may specify a position related to an image when the partial information is an image, and may specify a position related to feature points when the partial information relates to feature points.
• The position related to the feature points may include the position of a vector generated based on the feature points and the position related to a cluster generated based on such vectors.
• The position specifying unit of one example may store a partial image related to partial information in association with the position related to that partial image. Likewise, the position specifying unit of one example may store information on the feature points related to partial information in association with the position related to those feature points.
• The position of a partial image may be any information that indicates where the partial image lies within the target image.
• For example, it may be the center or the center of gravity of a polygon, such as a quadrangle, containing the partial image, or the center or center of gravity of a circle containing the partial image.
• The position related to the feature points may be specified by various methods using information on the feature points.
• For example, a position generated using one or more vectors based on the feature points may be specified as the position related to the feature points.
• Examples include the coordinates of the center point of the coordinates related to the one or more vectors, and the center or center of gravity of a polygon, such as a quadrangle, containing those coordinates, but the position is not limited to these.
• The one or more vectors may be the vectors included in the same cluster.
• As for the former, when one cluster is composed of two vectors, a vector (x11, y11)-(x12, y12) and a vector (x21, y21)-(x22, y22), the x-coordinate of the center point may be the average of x11, x12, x21, and x22, and the y-coordinate of the center point may be the average of y11, y12, y21, and y22.
• The average may be computed by various methods, such as the arithmetic mean, geometric mean, harmonic mean, or generalized mean.
• As for the latter, when one cluster is composed of the same two vectors (x11, y11)-(x12, y12) and (x21, y21)-(x22, y22), the vertices of the quadrangle may be set as (MINx, MINy), (MINx, MAXy), (MAXx, MINy), and (MAXx, MAXy), where MINx and MAXx are the minimum and maximum x-coordinates among the coordinates related to the vectors, and MINy and MAXy are the minimum and maximum y-coordinates among them.
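As an illustrative sketch (not part of the claimed subject matter), the two position methods above, the averaged center point and the (MINx, MINy)-(MAXx, MAXy) quadrangle, might be computed as follows; the function names and the two-vector sample cluster are invented for illustration:

```python
# A cluster is a list of vectors; each vector is a pair of endpoint coordinates.

def cluster_points(cluster):
    """Flatten a cluster of vectors into its endpoint coordinates."""
    return [pt for vec in cluster for pt in vec]

def center_point(cluster):
    """Arithmetic mean of all endpoint coordinates (the 'former' method)."""
    pts = cluster_points(cluster)
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def bounding_box(cluster):
    """Axis-aligned quadrangle (MINx, MINy)..(MAXx, MAXy) (the 'latter' method)."""
    pts = cluster_points(cluster)
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))

cluster = [((1.0, 2.0), (3.0, 4.0)),   # vector (x11, y11)-(x12, y12)
           ((5.0, 6.0), (7.0, 8.0))]   # vector (x21, y21)-(x22, y22)
print(center_point(cluster))   # (4.0, 5.0)
print(bounding_box(cluster))   # ((1.0, 2.0), (7.0, 8.0))
```

The arithmetic mean is used here; as noted above, other averages such as the geometric or harmonic mean could be substituted.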
• In this way, the target position can be specified so as to reflect the information of each cluster.
• Target number estimation unit 14: the target number estimation unit has a function of estimating the number of targets related to partial information when the partial information is information related to feature points.
• The target number estimation unit in one example may determine the number of targets by using a rule that derives the number of targets from the number of feature points included in each cluster related to the partial information (sometimes referred to as the "target number estimation rule" in this application).
• FIG. 7 may be generated as follows. For one or more sample images, the feature points in each image are extracted, clustering is performed on those feature points, and for each resulting cluster the feature points it contains are identified and counted. Since multiple clusters can be detected, the number of contained feature points is determined for each cluster, and there may be several clusters with the same count (for example, 12 clusters each containing 5 feature points). In the figure, the horizontal axis is the number of feature points contained in one cluster, and the vertical axis is the number of clusters with that count.
• The point at 5 on the horizontal axis and 12 on the vertical axis therefore represents a situation in which 12 clusters each contain 5 feature points.
• In the figure, 5, 9, and 13 on the horizontal axis are peaks, and 7, 11, and 15 are valleys.
• A target number may then be assigned to each range of counts: for example, when the number of feature points contained in one cluster is 1 to 7, the number of targets is taken to be 1 on average; when it is 8 to 11, the number of targets is taken to be 2 on average; and so on.
• Note that although the first peak at 5 feature points suggests about 5 feature points per target, the peak for two targets is at 9 rather than 10, and for three targets at 13 rather than 15. A simple multiple might be expected, but the number of feature points can decrease depending on, for example, the degree to which the targets overlap.
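As an illustrative sketch (not part of the claimed subject matter), a target number estimation rule that uses the valleys of the histogram as boundaries between target counts might look as follows; the valley positions 7, 11, and 15 are taken from the example above and would normally be read off the histogram for a specific setting:

```python
# Valleys of the cluster-size histogram: boundaries between 1, 2, 3, ... targets.
VALLEYS = [7, 11, 15]

def estimate_targets(feature_count):
    """Map the number of feature points in one cluster to a target count."""
    for i, boundary in enumerate(VALLEYS):
        if feature_count <= boundary:
            return i + 1
    # beyond the last valley, extrapolate to the next target count
    return len(VALLEYS) + 1

print(estimate_targets(5))   # 1 (near the first peak)
print(estimate_targets(9))   # 2 (near the second peak)
print(estimate_targets(13))  # 3 (near the third peak)
```

This mirrors the ranges above (1-7 feature points: 1 target; 8-11: 2 targets; 12-15: 3 targets) and accommodates the fact that the peaks are not simple multiples.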
• The above-mentioned target number estimation rule may be associated with information related to the camera by which the image is captured.
• In this case, the target number estimation rule associated with the information related to one camera may differ from the rule associated with the information related to another camera.
• Setting the target number estimation rule per camera has the advantage that a precise rule can be set.
• The information related to the camera by which the image is captured may be the position where the camera is installed, the time of capture, the movement of the camera, a specific camera ID, and the like.
• The position where the camera is installed may be the facility where the camera is installed, the position within that facility, or the like.
• The position within the facility may be, for example, the ceiling or a wall. This is because a certain relationship is considered to exist between the information related to the camera, the number of feature points, and the number of targets.
• For example, a delicatessen section is composed of multiple sections and the travel routes along which shoppers move between them.
• The routes that people take are thus limited to certain places.
• A camera installed in such an environment captures images from a specific direction at a specific tilt. When a target person is imaged there, the way feature points appear is therefore also limited, and the relationship between the number of targets and the number of feature points is considered to follow a certain rule.
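As an illustrative sketch (not part of the claimed subject matter), associating a rule with camera-related information such as a camera ID could be as simple as a keyed lookup with a fallback; the camera IDs and valley values below are invented:

```python
# One estimation rule per camera; a rule here is the list of valley
# boundaries described above (1-7 points: 1 target, etc.).
RULES_BY_CAMERA = {
    'ceiling-cam-01': [7, 11, 15],
    'wall-cam-02':    [6, 10, 14],   # a different viewpoint, different valleys
}
DEFAULT_RULE = [7, 11, 15]

def rule_for(camera_id):
    """Select the target number estimation rule for a given camera ID."""
    return RULES_BY_CAMERA.get(camera_id, DEFAULT_RULE)

print(rule_for('wall-cam-02'))   # [6, 10, 14]
print(rule_for('unknown-cam'))   # [7, 11, 15]
```

The same keying scheme could use installation position, time of capture, or the target or clustering-method groups discussed below.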
• The target number estimation rule may also be associated with the target.
• The target may be a person, a vehicle, or an animal, but may be more specific: for example, a person belonging to a predetermined group, a vehicle of a predetermined size or vehicle type, or an animal belonging to a predetermined group. Since the extraction of feature points may differ according to the group to which the person, vehicle, or animal belongs, there is an advantage that the target number estimation rule can be set per target whose feature points may be detected differently.
• A person belonging to a predetermined group may include a person in a suit, a person in casual clothes, a person in work clothes such as those worn in a factory or on a construction site, and the like.
• Determining the target number estimation rule for persons wearing such specific clothing can further improve the accuracy of the target number estimation.
• A vehicle of a predetermined size or vehicle type may be a general vehicle, a motorcycle, a taxi, an emergency vehicle, a truck, a bus, or the like. These differ in size, and a taxi, for example, may yield different feature points because of the specific mark on its roof.
• Determining the target number estimation rule according to the specific vehicle can therefore further improve the accuracy of the target number estimation.
• The target number estimation rule may be associated with the clustering method. Since the clusters obtained from the vectors based on the feature points may differ depending on the clustering method, there is an advantage that the rule can be set per clustering method.
• The target number estimation rule may be associated with a predetermined group.
• The predetermined group may be defined by the above-mentioned camera-related information, by the targets, by the clustering methods, or by a combination of these. Grouping has the advantage of reducing the burden of establishing a separate target number estimation rule for every individual case.
• The target number estimation rule may be determined manually, with a person examining specific images, deciding the relationship between the number of feature points and the number of targets, and entering the resulting values into the system as the rule for estimating the number of targets, or it may be determined mechanically. The mechanical method has the advantage of reducing the human burden.
• The target number estimation rule unit may include a configuration for inputting information related to the target number estimation rule and a configuration for acquiring such information.
• The information related to the target number estimation rule is not limited in form as long as it is information necessary for determining the rule; for example, it may be information in which the number of feature points and the number of targets are directly or indirectly associated with each other.
• FIG. 8 shows an example of a screen for acquiring such information.
• The configurations for inputting and acquiring the target number estimation rule may be provided as boundaries between target numbers entered via a GUI.
• This method has the advantage that the user can input the target number estimation rule while seeing, on the screen, the relationship between each cluster and its feature-point count.
• The target number estimation rule unit may thus acquire the boundaries between target numbers and the corresponding numbers of feature points.
• Although a histogram is shown here, various graphs such as a bar graph, line graph, pie chart, band graph, or scatter plot may be used.
• The input means may also take various forms.
• For example, in conjunction with or instead of the above, numerical values may be entered directly. As in "Relationship between the number of targets and the number of feature points" (02), a value may be entered for the relationship between the number of targets and the number of feature points, and the target number estimation rule unit may acquire such information. This has the advantage that a specific numerical value can be input.
• As a mechanical method of determining the rule, machine learning can be mentioned.
• Artificial intelligence technologies may be used, for example machine learning technologies such as neural networks, genetic programming, inductive logic programming, support vector machines, clustering, regression, classification, Bayesian networks, reinforcement learning, representation learning, decision trees, and k-means clustering.
• Deep learning is a technique that, by learning the relationship between inputs and outputs using multiple layers, enables output even for unknown inputs.
• There are supervised and unsupervised methods with respect to the training data, and either may be applied.
• This system may use such machine learning technology to learn the relationship between the number of feature points and the number of targets, either by inputting those numbers directly or by inputting images together with the corresponding numbers of feature points and targets.
• Alternatively, the target number estimation rule may be specified from the numbers of feature points and targets by a statistical method. If the rule is determined before the captured moving image is analysed, it need not be determined at analysis time. Therefore, even if determining the rule requires a large amount of calculation and calculation time, for example because of machine learning, this does not affect the amount of calculation or the calculation time of the video analysis itself, and the analysis of the moving image can still be executed in real time.
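As an illustrative sketch (not part of the claimed subject matter), one simple statistical way to derive the rule offline from labelled samples, pairs of (feature points per cluster, human-counted targets), is a majority vote per feature count; the sample data below is invented:

```python
from collections import Counter, defaultdict

def fit_rule(samples):
    """Derive a feature-count -> target-count mapping by majority vote.

    samples: iterable of (feature_count, target_count) pairs, e.g. produced
    by a person counting targets in sample images (the manual method above).
    """
    votes = defaultdict(Counter)
    for features, targets in samples:
        votes[features][targets] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

samples = [(5, 1), (5, 1), (6, 1), (9, 2), (9, 2), (10, 2), (13, 3)]
rule = fit_rule(samples)
print(rule[5], rule[9], rule[13])  # 1 2 3
```

Because this fitting happens before video analysis, its cost does not add to the per-frame processing load, as noted above.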
• The target number estimation rule determined manually or mechanically by the above methods may be used in various ways.
• For example, the rule may be held as a table or a function, or used as a machine-learned system.
• The relationship between the number of feature points and the number of targets obtained by machine learning may also be converted into such a table or function by querying the learned model with each number of feature points on a trial basis and recording the resulting number of targets.
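As an illustrative sketch (not part of the claimed subject matter), freezing any learned predictor into a lookup table, so that no model inference is needed during video analysis, could look as follows; `toy_predictor` is an invented stand-in, not a model from this application:

```python
def tabulate(predictor, max_features):
    """Query `predictor` once per feature count and freeze the results."""
    return {n: predictor(n) for n in range(1, max_features + 1)}

def toy_predictor(n):
    # Stand-in for a trained model: roughly one target per ~4 feature points.
    return max(1, (n + 2) // 4)

table = tabulate(toy_predictor, 16)
print(table[5], table[9], table[13])  # 1 2 3
```

After tabulation, per-cluster lookup is a constant-time dictionary access, regardless of how expensive the original model was.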
• An example system may be a system including: an extraction unit that extracts one or more first feature points as the first partial information related to the first image; and a target number estimation unit that estimates the number of targets related to the first partial information by applying the target number estimation rule to the one or more first feature points.
• The target number estimation unit has a function of estimating the number of targets for one cluster by applying the above-mentioned target number estimation rule to the number of feature points in that cluster.
• Because the target number estimation unit uses the target number estimation rule, the relationship between the number of feature points and the number of targets is known regardless of whether the clustering technique is of low or high precision, so there is an advantage that the number of targets associated with one cluster can reasonably be specified from the number of feature points in that cluster.
• For example, when clustering is performed, along the flow of a moving image, on a plurality of feature points that belong to one cluster whose target number is 2 and that are based on vectors between images, the clustering technique may yield more than one cluster.
• Even in this case, the number of targets for each resulting cluster may be determined from the number of feature points related to that cluster based on the target number estimation rule.
• The target number estimation unit may use any clustering technology. It can therefore be applied even when the arithmetic unit that processes the clustering has a relatively low clock rate, such as the CPU of a smartphone or a lightweight notebook computer; when the storage unit used for the clustering has a relatively small capacity, as in such devices; or when the clustering must run concurrently with other image processing or other computation with a high processing load.
• When the system of one example is provided with this target number estimation unit, there is an advantage that the number of targets can be specified even when a cluster is associated with a plurality of targets. In particular, such movement can be identified even when one of a plurality of targets starts moving differently.
• An example system may be a system including: an extraction unit that extracts partial information from the image captured by the image pickup device; and a target number estimation unit that determines, for the cluster related to the partial information, the number of targets related to that partial information.
• Such a target number estimation unit can estimate the number of targets regardless of whether the system has an information generation unit capable of generating information by a computationally heavy function such as the trained model described later.
• The information generation unit has a function of generating information related to partial information, and may generate the target information related to the partial information by various methods.
• The information generation unit of one example may be configured to acquire an image related to the partial information and generate the target information by machine learning.
• The information related to the partial information may be the name or an attribute of the target related to the partial information.
• For example, it may be the name, type, size, or an attribute of a vehicle or the like.
• It may also be information characterising a person from a specific viewpoint, such as gender, age, age group, height, physique, or facial expression, or information on fashion items.
• The machine learning function of the information generation unit in this example may learn the relationship between an image, or part of an image, and the target information.
• The machine-learned information generation unit may then generate, for a given image or part of an image, the target information corresponding to it.
• Artificial intelligence technologies may be used, for example machine learning technologies such as neural networks, genetic programming, inductive logic programming, support vector machines, clustering, regression, classification, Bayesian networks, reinforcement learning, representation learning, decision trees, and k-means clustering. In the following, an example using a neural network is described, but the invention is not necessarily limited to neural networks.
• Deep learning is a technology that, by learning the relationship between inputs and outputs using multiple layers, enables output even for unknown inputs.
• There are supervised and unsupervised methods, and either may be applied.
• Machine learning with deep learning often requires a large number of calculations to learn the relationship between input and output. Therefore, applying only a part of the entire image captured by the image pickup device to the learned information generation unit has the advantage that the amount of calculation can be suppressed.
• In particular, the partial image of the target image corresponding to the moving part of the moving image, compared with the image of the non-moving part, largely overlaps the region in which the target (for example, a person, a pet, or a vehicle) is captured. Applying such a partial image to the learned information generation unit therefore has the advantage that the target information can be generated easily with a small amount of calculation, and in particular that the trained model can be applied not only on a GPU suited to neural networks but also on the CPU of a notebook computer, a smartphone, or the like.
• One example system includes: an extraction unit that extracts partial information from the target image; and a generation unit that generates target information related to the partial information based on the extracted image related to the partial information.
• The image related to the partial information may be such that the amount of image information in a part of it is smaller than the amount of image information of the corresponding part of the target image. That is, compared with the original captured target image, the amount of image information is reduced in the corresponding portion, which has the advantage of reducing the burden of applying the trained model.
• In particular, the image related to the partial information may be such that the amount of image information in part of the background region is smaller than that of the corresponding part of the target image.
• Since the amount of image information is reduced in the background region, where the required information is presumably scarce, the burden of applying the trained model can be reduced without significantly harming the accuracy of the generated information.
• The image related to the partial information of one example system may be such that the amount of image information in a part of it is smaller than that of the rest of the image.
• That is, in the image related to the partial information, the amount of image information is partially reduced compared with the other parts.
• The image related to the partial information of one example system may be such that the amount of image information in part of the background region is smaller than outside the background region.
• That is, in the image related to the partial information, part of the background region has a smaller amount of image information than the rest of the image.
• The image related to the partial information of one example system may have a single amount of image information per unit in at least part of the background region.
• The single amount of image information may be, for example, a single color.
• The single color may be, for example, 0 or 255 in the RGB model.
• On the other hand, the amount of image information related to the target outside the background region may be the same as the amount of image information related to the target in the original captured target image.
• In this case, the trained model can be applied to generate information such as the color and type of the target.
• The background region may be computed from the difference between at least two of the images constituting the moving image, or from the difference between an image captured by the imaging device and a background image captured by the same device.
• The background image may be an image captured before the difference processing is performed, for example an image captured in advance.
• The background region may be, for example, the part of an image captured by a traffic camera other than the vehicles, or the part of an image captured in a commercial facility other than the visitors.
• Since the background region lies outside the targets, such as vehicles, for which information is to be generated, applying the trained model after reducing the amount of information about the background region reduces the information that must be processed.
• In one example, the amount of image data used is reduced to roughly 1/100, making it possible to perform the computation on arithmetic units other than those conventionally required for applying a trained model.
• Moreover, when the amount of image information in the background region is small and a neural network is applied as the trained model, there is an advantage that the structure of the neural network can be simplified.
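As an illustrative sketch (not part of the claimed subject matter), the background-difference idea above can be shown on grayscale images represented as 2D lists: pixels that barely change relative to the background image are overwritten with a single value, so the partial image carries information only where something moved. The function name, threshold, and sample pixel values are invented:

```python
def mask_background(frame, background, threshold=10, fill=0):
    """Keep pixels that differ from the background; fill the rest with one value."""
    return [[px if abs(px - bg) > threshold else fill
             for px, bg in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[50, 50, 50],
              [50, 50, 50]]
frame      = [[50, 200, 50],      # a bright object entered the middle column
              [52, 210, 50]]      # pixel noise (52 vs 50) stays below threshold
print(mask_background(frame, background))
# [[0, 200, 0], [0, 210, 0]]
```

The target pixels retain their original values, so the trained model can still generate information such as the target's color, while the background contributes almost no information.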
• The image related to the partial information of one example system may have a single amount of image information per unit outside the background region. Outside the background region there is presumed to be, for example, a vehicle if the image is captured by a traffic camera, or a visitor if the image is captured in a commercial facility.
• The single amount of image information may be, for example, a single color.
• The single color may be, for example, 0 or 255 in the RGB model.
• For example, by setting the color of the background region to 0 and the color outside the background region to 255, and applying the trained model to the image related to this binarized partial information, the background and the interior of the target can be distinguished.
• The shadow of the target may be treated as part of the background region, may be included with the target outside the background region, or may itself be given a single amount of image information as described above.
• For example, the image related to the partial information of one example system may be converted into ternary partial information, such as 0 for the background region, 255 for the target, and 128 for the shadow of the target in the RGB model, and the trained model may be applied to the image related to this ternarized partial information.
• In this case, the target information can be generated mainly from information such as the outline and shadow of the target without excessively increasing the computational load.
• Although the degree of the shadow may differ depending on the weather, such differences then have the advantage of not affecting the application of the trained model.
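As an illustrative sketch (not part of the claimed subject matter), the ternarisation above, background 0, target 255, shadow 128, might be computed against a background image as follows. The decision margins and the assumption that a shadow is a moderately darker pixel are invented for illustration:

```python
def ternarize(frame, background, target_diff=60, shadow_diff=15):
    """Quantise a grayscale frame to 0 (background), 255 (target), 128 (shadow)."""
    out = []
    for frow, brow in zip(frame, background):
        row = []
        for px, bg in zip(frow, brow):
            if abs(px - bg) > target_diff:
                row.append(255)          # strong change in either direction: target
            elif bg - px > shadow_diff:
                row.append(128)          # moderately darker than background: shadow
            else:
                row.append(0)            # essentially unchanged: background
        out.append(row)
    return out

background = [[100, 100, 100]]
frame      = [[100, 220, 70]]    # unchanged, object, shadowed pixel
print(ternarize(frame, background))  # [[0, 255, 128]]
```

Because every shadow pixel collapses to the single value 128 regardless of how dark the actual shadow is, weather-dependent shadow intensity does not change the input seen by the trained model.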
• The trained model may be selected and used in association with the place where the image is captured.
• For example, for a traffic camera or a camera in a facility, the imaging target is clearly a vehicle or a visitor. Therefore, a trained model in which the relationship between vehicles and their names has been machine-learned may be applied to target images captured by a traffic camera, and a trained model in which the relationship between visitors and their names has been machine-learned may be applied to target images captured by a camera in a facility.
• The trained models may be stored in association with the installed image pickup devices, and the system of one example may be configured so that the trained model to be applied can be selected in association with the image pickup device by which the target image was captured.
• The image pickup devices may be divided into groups, such as traffic cameras, in-facility cameras, or groups according to the various uses described later, with a trained model provided, or connectable, for each group, and the trained model may be selectable based on the image pickup device related to the partial information to be processed.
• An example information generation unit may include a trained model for causing a computer to generate, based on an image or part of an image, the target information corresponding to that image or part of an image.
• Such a trained model may be implemented by a program that forms part of artificial intelligence software.
• The above-mentioned machine learning function may be provided within the information generation unit of the example, or may reside in an information processing device outside the system and be queried from the system.
• In the latter case, the information generation unit of the example may include a transmission unit that transmits an image or part of an image to the external information processing device having the machine learning function, and an acquisition unit that acquires the target information corresponding to that image or part of an image. The information generation unit of the example may further store the image or part of an image in association with the acquired target information.
• An example system may be a system including: an extraction unit that extracts partial information from the image captured by the image pickup device; and an information generation unit that generates target information related to the partial information from the image related to the partial information.
• Boundary unit 16: the boundary unit has a function of managing boundaries.
• A boundary may be anything that divides at least a part of a certain region. It may be a line segment, an arc, or the like, or a circle, a polygon, or the like, and it may or may not enclose a closed region, as with a partial line.
• The boundary may be set with respect to the image.
• The boundary unit may associate one or more boundaries with an image and store them.
• The boundary unit may store a boundary as a function, or as a graph consisting of vertices and edges.
• FIG. 9 shows examples of boundaries.
• A boundary may be a circle (01), a polygon such as a quadrangle (02), a line (03) that forms part of a closed region together with the edge of the image or of the screen on which the image is displayed, a simple line segment (04) that does not enclose a region, an arc (05), or some other irregular line. Regarded as the vertices and edges of a graph, a boundary may or may not form a closed path.
• A boundary may be stored in the storage unit as such a graph, in static memory such as a matrix, a list, or a structure, or in dynamic memory, and these may be assembled dynamically.
• The graph may be a directed graph or an undirected graph.
  • the statistical processing unit of one example may associate one direction with respect to one boundary. Since the boundary has a role of detecting the intersection with the position of some information, the direction related to the boundary is information for determining from which side the partial information has moved to the other side with respect to the boundary. May be used as.
  • A boundary has the advantage that movement of partial information may be detectable even if the path is not closed.
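The side-crossing idea described above can be sketched in code. The following is a minimal geometric sketch, not the patent's implementation: it assumes positions are 2-D image coordinates and represents an open boundary as a single segment `a`→`b`; the sign of the result plays the role of the direction associated with the boundary.

```python
def side(a, b, p):
    """Sign of the cross product: which side of the boundary segment a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossing_direction(a, b, p_prev, p_curr):
    """Return +1 or -1 if the move p_prev -> p_curr crosses the (possibly open)
    boundary segment a->b, else 0.  The sign indicates from which side the
    partial information moved to the other side of the boundary."""
    s_prev, s_curr = side(a, b, p_prev), side(a, b, p_curr)
    if s_prev == 0 or s_curr == 0 or (s_prev > 0) == (s_curr > 0):
        return 0  # the position did not change sides
    # also require the movement segment to intersect within the boundary segment
    t_a = side(p_prev, p_curr, a)
    t_b = side(p_prev, p_curr, b)
    if t_a == 0 or t_b == 0 or (t_a > 0) == (t_b > 0):
        return 0  # sides changed, but beyond the ends of the boundary
    return 1 if s_prev > 0 else -1
```

Because the test is per-segment, a polyline or polygonal boundary can be handled by applying it to each edge, so open boundaries work just as well as closed ones.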
  • For example, when the image shows the sales floor of a store, people cannot pass through places where shelves and other fixtures are installed, so a boundary need only be set in places other than such shelves and fixtures.
  • Even when the boundary is not a closed path, target paths are not necessarily excluded in relation to the image, and a closed path may be set at any movement point of the partial information that the user wants to detect.
  • The boundary unit may provide a function that allows the user to input a boundary; this has the advantage that the user can input boundaries. The boundary unit may also have a function of displaying a screen that supports boundary input, which has the advantage that the user can understand information about entering a boundary. In addition, the boundary unit may have a function of acquiring information on the boundary input by the user, which has the advantage that the system can use that information.
  • FIG. 10 is an example of a screen display provided by the boundary portion where the user can input the boundary.
  • Boundaries themselves may be set graphically using a pointing device such as a mouse or a pointer. It may be possible to select a plurality of boundaries to set a region (1003), or to deselect boundaries (1007). Boundaries on the screen may be deleted individually (1008) or all at once (1009). Further, the name of the image (1001) and the name of the region (1002) can be displayed. The set boundary can be stored (1005), and boundaries can be set one after another for various screens (1006).
  • A system of one example may be a system provided with a specifying unit that specifies the position of a target by using the estimated number of targets for one cluster in the image.
  • A system of one example may be a system provided with a specifying unit that specifies the position of the target, and a statistical processing unit, using the estimated number of targets for one cluster in the image.
  • the statistical processing unit 17 has a function of generating statistical information.
  • Statistical information is information generated by using some information directly or indirectly.
  • the statistical information itself or the information necessary for generating the statistical information may be stored in a storage device (hereinafter, may be referred to as a "statistical information database").
  • Here the term database means only a collection of data; it may or may not have large-scale data storage, rapid data access, and similar functions.
  • Such a statistical information database may be provided within the example system, or may be located in an information processing device outside the example system and queried from it.
  • An example system may have a communication unit for querying the statistical information database and an acquisition unit for acquiring the information obtained from the statistical information database.
  • Statistical information may be information related to the position of partial information, or information related to a boundary. Such statistical information may be generated when the relationship based on the partial information indicates a pseudo-identical target.
  • The information related to the position of the partial information may be the moving direction of that position, or the movement of that position.
  • The statistical processing unit may generate the moving direction of the position related to the partial information.
  • The moving direction may be specified by a numerical value such as 0 to 360 degrees or 0 to 2π, or by a value associated with a predetermined range such as north, south, east, west, up, down, left, or right.
  • The width of the predetermined ranges may vary. Even when the direction is specified by a predetermined range, there is an advantage that the displayed information is easier for the viewer to understand.
  • The moving direction of the partial information may be generated from the vector between positions in the images related to the partial information, or by using vectors of feature points related to the partial information. The moving direction may also be generated using a plurality of pieces of partial information; in that case, an average of vectors based on the positions of the plurality of pieces of partial information may be used.
  • An example system may have a statistical processing unit that generates a moving direction from the position related to the first partial information and the position related to the second partial information.
  • An example system may include, for a first image and a second image that make up a moving image:
  • a partial information extraction unit that extracts first partial information from the first image and second partial information from the second image;
  • a position specifying unit that specifies a position related to the first partial information and a position related to the second partial information; and
  • a statistical processing unit that generates a moving direction from the position related to the first partial information and the position related to the second partial information.
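The direction generation described above can be sketched briefly. This is an illustrative sketch only: the coordinate convention, function names, and the coarse right/up/left/down buckets (one possible "predetermined range") are assumptions, not the patent's definitions.

```python
import math

def moving_direction(p1, p2):
    """Angle in degrees (0-360) of the move from the position related to the
    first partial information (p1) to that of the second partial information (p2)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0])) % 360.0

def compass(angle):
    """Map a numeric angle to a coarse predetermined range, which may be
    easier for the viewer to understand than a raw value."""
    for bound, name in [(45.0, "right"), (135.0, "up"), (225.0, "left"), (315.0, "down")]:
        if angle < bound:
            return name
    return "right"  # angles in (315, 360) wrap back to "right"

def mean_direction(pairs):
    """Direction from an average of movement vectors over a plurality of
    pieces of partial information, as the text suggests."""
    dx = sum(p2[0] - p1[0] for p1, p2 in pairs)
    dy = sum(p2[1] - p1[1] for p1, p2 in pairs)
    return moving_direction((0.0, 0.0), (dx, dy))
```

`moving_direction` yields the numeric form (0 to 360) and `compass` the range-based form; either could feed the display of FIG. 11.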
  • When the system of one example uses the target number estimation function or the number of pieces of partial information, it can generate information on the number of targets and/or the number of pieces of partial information in addition to the moving direction, so when this is displayed, the viewer has the advantage of being able to understand how many targets are making the movement.
  • The system of one example may use the boundary function to specify the timing of generating and displaying the moving direction. For example, when configured to generate and display the moving direction at a position where a boundary is set, there is an advantage that the display can be made at a specific position for the viewer.
  • FIG. 11 is an example of displaying the moving direction of a position related to partial information; display 1101 shows one moving direction for the partial information, and display 1102 shows a moving direction to the right.
  • The movement of a position related to partial information may be generated using one piece of partial information per image across a plurality of corresponding images, based on each position related to those pieces of partial information. The movement may also be generated using a plurality of pieces of partial information within one image.
  • The statistical processing unit of one example may store information in which partial information is associated with the position related to that partial information.
  • An example system may have a statistical processing unit that generates the movement of partial information from the position related to the first partial information and the position related to the second partial information.
  • An example system may include, for a first image and a second image that make up a moving image:
  • a partial information extraction unit that extracts first partial information from the first image and second partial information from the second image;
  • a position specifying unit that specifies a position related to the first partial information and a position related to the second partial information; and
  • a statistical processing unit that generates motion from the position related to the first partial information and the position related to the second partial information.
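The movement generation above can be sketched as accumulating positions per pseudo-identical target and emitting a polyline. The class and its fields are hypothetical names for illustration; only the positions and frame order come from the text.

```python
class MovementTracker:
    """Accumulates positions of partial information judged to belong to the
    same (pseudo-identical) target, frame by frame, and returns the movement
    as a polyline that a display could draw (cf. the curves of FIG. 12)."""

    def __init__(self):
        self.tracks = {}  # target id -> list of (frame, x, y)

    def add(self, target_id, frame, x, y):
        self.tracks.setdefault(target_id, []).append((frame, x, y))

    def movement(self, target_id):
        """Positions sorted by frame: the generated movement of the target."""
        return [(x, y) for _, x, y in sorted(self.tracks.get(target_id, []))]
```

Note that, as the text observes, no target number estimation or boundary is needed here: generating movement requires only the identified positions.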
  • The system of the example can generate the movement of such partial information without the target number estimation function or the boundary function.
  • When the system of one example uses the target number estimation function or the number of pieces of partial information, it can generate information on the number of targets and/or the number of pieces of partial information in addition to the movement, so when this is displayed, the viewer has the advantage of being able to understand how many targets are moving.
  • The example system may also use the boundary function to specify the timing of generating and displaying the movement. For example, when configured to generate and display the movement at a position where a boundary is set, there is an advantage that the display can be made at a specific position for the viewer.
  • FIG. 12 is an example of displaying the movement of a position related to partial information: display 1201 shows a curved movement of the partial information, and display 1202 shows a linear movement.
  • The system of one example may display both the moving direction and the movement of the position related to partial information.
  • In this case, the viewer has the advantage of being able to understand both past movements and the current moving direction.
  • The information related to a boundary may be information generated in relation to the boundary: for example, the number of positions of partial information intersecting the boundary, or the number of targets related to the partial information intersecting the boundary.
  • The number of pieces of partial information intersecting one boundary may be, for example, the total number of pieces of partial information intersecting the boundary, the total number during a predetermined period, the average number per predetermined period, or a density obtained by dividing the total by a numerical value related to the boundary.
  • The numerical value related to the boundary may be, for example, the area associated with the boundary or the rent associated with the boundary. For a closed boundary these may be the area or rent of the closed region; for an open boundary they may be a virtual area or rent associated with the boundary. A system with such a configuration has the advantage of being able to generate statistical information based on the positions of partial information.
  • Likewise, the number of targets related to the partial information intersecting one boundary may be, for example, the total number of such targets intersecting the boundary, the total number during a predetermined period, the average number per predetermined period, or a density obtained by dividing the total by a numerical value related to the boundary.
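The totals, per-period averages, and densities listed above can be sketched in one small function. The record shape (timestamped crossings keyed by a boundary identifier) and the function name are illustrative assumptions.

```python
from collections import Counter

def boundary_statistics(crossings, period_seconds, boundary_value=None):
    """crossings: list of (timestamp_seconds, boundary_id) crossing events.
    Returns, per boundary: the total number of crossings, the average per
    predetermined period, and optionally a density (total divided by a
    numerical value, such as an area, associated with that boundary)."""
    if not crossings:
        return {}
    totals = Counter(b for _, b in crossings)
    span = max(t for t, _ in crossings) - min(t for t, _ in crossings)
    periods = max(1, int(span // period_seconds) + 1)
    stats = {}
    for b, total in totals.items():
        stats[b] = {"total": total, "average_per_period": total / periods}
        if boundary_value and b in boundary_value:
            stats[b]["density"] = total / boundary_value[b]
    return stats
```

The same aggregation would serve the "boundary location" variant: one simply keys the crossings by the place associated with each boundary instead of the boundary itself.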
  • When one boundary is associated with one specific place, one specific product location, one specific advertisement location, or the like (hereinafter also referred to as a "boundary location, etc."), the information related to the boundary may be the total number of pieces of partial information intersecting the boundary associated with that specific boundary location, the total number during a predetermined period, the average per predetermined period, and so on. When an example system generates such information, it has the advantage of being able to generate statistical information that includes elements such as boundary locations.
  • The relationship between a boundary and a boundary location, etc. may be stored in association in the storage device (hereinafter sometimes referred to as a "boundary location database").
  • Here too, the term database means only a collection of data; it may or may not have large-scale data storage, rapid data access, and similar functions.
  • A boundary location database may be provided within the example system, or may be located in an information processing device outside the example system and queried from it.
  • An example system may have a communication unit for querying the boundary location database and an acquisition unit for acquiring the information obtained from the boundary location database.
  • the information related to the boundary may include a store as one specific place.
  • The information related to the boundary may include the number of visits by a person, as the target of the partial information, to one store.
  • the information related to the boundary may be various rankings as described later.
  • The statistical processing unit of one example may generate statistical information by using the partial information relation generation unit, the position specifying unit, the target number estimation unit, the information generation unit, the boundary unit, and/or the tracking unit described later.
  • Tracking unit 18 An example system may have a tracking unit.
  • The tracking unit may have only the function of tracking the movement of partial information within the moving image captured by one imaging device, only the function of tracking the movement of a target from the moving image captured by one imaging device to the moving image captured by another imaging device, or both of these functions.
  • the tracking unit may track the movement of the position related to some information.
  • the tracking unit may have a function of determining whether or not the target related to some information is a pseudo-identical target.
  • The tracking unit may determine whether targets are such pseudo-identical targets by using feature points related to the cluster of the partial information, the positional relationship of the imaging ranges of the imaging devices, and/or the time information of the imaging.
  • the tracking unit may determine whether or not they are pseudo-identical objects by using the above-mentioned partial information relation generation unit.
  • the tracking unit may track the target related to the partial information when it is determined that the target related to the partial information is the same.
  • The positional relationship of one or more feature points may be used to determine whether targets are pseudo-identical. This is based on the estimate that the positional relationship of one or more feature points related to one target either does not change, or changes in a constant way, between its position in an image of the moving image from one imaging device and its position in an image of the moving image from another imaging device. Such constant change can arise from the difference between the installation angle and position of one imaging device and those of another; by creating a rule for this positional relationship in advance, it may be used as information for determining whether the targets are pseudo-identical. If creating such a rule is difficult, the positional relationship of the feature points need not be used.
  • When the tracking unit uses the positional relationship of the imaging ranges of the imaging devices, the imaging range of one imaging device and that of another may, for example, be adjacent to each other or partially overlap.
  • When the imaging range of one imaging device and that of another are adjacent, a feature point that keeps moving is estimated to pass from the imaging range of the one device into the imaging range of the other. By estimating that a target within the imaging range of the one device has moved into the imaging range of the other, it may be determined whether the targets are pseudo-identical.
  • When the imaging ranges partially overlap, feature points in the overlapping range exist in the imaging range of one imaging device and in that of the other at the same time, so the identity of the feature points may be determined using this information to decide whether the targets are pseudo-identical.
  • When the tracking unit uses the time information of imaging, and the imaging range of one imaging device and that of another are adjacent, a feature point that keeps moving is presumed to move from the imaging range of the one device to that of the other. By using the time information, such a relationship can be estimated and the identity of the feature points determined, in order to decide whether the targets are pseudo-identical.
  • The tracking unit of the system of one example may detect the movement of a target across the moving images of a plurality of imaging devices, thereby specifying the movement of the target.
  • the tracking unit may store information relating the position and time of the target based on the information on the movement of the target.
  • FIG. 13 is an example of such a memory.
  • By using such information, the tracking unit may have a function of identifying the position of the target during a specific period from one specific time to another, and of obtaining the position of the target.
  • The tracking unit may have a function of specifying, using the boundary location database, the position of a target related to partial information for a specific time zone. This has the advantage that, for example, it is possible to identify where a target was during a certain time zone.
  • The tracking unit may have a function of specifying, using the boundary location database, one or more pieces of partial information, or the targets related to them, for a specific time zone and a specific place. This has the advantage that, for example, it is possible to identify what kind of target was at a specific place during a specific time zone.
  • The tracking unit may have a function of specifying, using the boundary location database, a specific time zone for a specific place and a target related to partial information. This has the advantage that, for example, it is possible to identify when a specific person was at a specific place.
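The three queries above (where a target was during a time zone, which targets were at a place during a time zone, and when a target was at a place) can be sketched over a simple position-time store like that of FIG. 13. The class name, record shape, and method names are assumptions made for illustration.

```python
class TrackingLog:
    """Stores (target, place, time) records and answers the three queries
    described in the text."""

    def __init__(self):
        self.records = []  # list of (target_id, place, time)

    def add(self, target_id, place, time):
        self.records.append((target_id, place, time))

    def places_of(self, target_id, t_from, t_to):
        """Where a target was during a specific time zone."""
        return sorted({p for tid, p, t in self.records
                       if tid == target_id and t_from <= t <= t_to})

    def targets_at(self, place, t_from, t_to):
        """Which targets were at a specific place during a specific time zone."""
        return sorted({tid for tid, p, t in self.records
                       if p == place and t_from <= t <= t_to})

    def times_of(self, target_id, place):
        """When a specific target was at a specific place."""
        return sorted(t for tid, p, t in self.records
                      if tid == target_id and p == place)
```

In practice the places would come from the boundary location database, i.e. from the boundary each position falls within.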
  • FIG. 20 shows the processing flow of an example system using each of the above functions.
  • The order of generating each piece of information may be changed as appropriate, as long as the information necessary for the generation is available.
  • The flow shows the processing of steps 2002 to 2008 after the first partial information and the second partial information are generated, but this is only an example. The processing flow may be changed, for example by specifying the position after generating the first partial information and then generating the second partial information, or by managing the boundary (2006) before generating the first partial information (2001), as long as the information that each function's processing presupposes is available.
  • An example system may include the configuration illustrated in FIG. In this figure, the system of one example may be composed of the information processing apparatus 1410.
  • the information processing device 1401 may include an arithmetic unit 1411, a storage device 1412, a bus 1415, and a communication device 1416.
  • the arithmetic unit 1411 may have an arithmetic function, may be a CPU, a GPU, or the like, and may have a function capable of executing various instructions. Further, it may have a storage function such as a cache.
  • the storage device 1412 may be a storage device having a storage function, may be a primary to tertiary storage device, or may be a volatile / non-volatile memory.
  • The storage device may store a program that executes, in software, the processing described in the present application, together with information scheduled to be processed or information resulting from processing by instructions of the program. The memory may be temporary or permanent.
  • the bus 1415 may have a function of transmitting information in the information processing device. Further, the communication device 1416 may have a function for transmitting information via a network.
  • the information processing device 1401 is provided with one arithmetic unit and one storage device, but a plurality of arithmetic units may be provided, and a plurality of storage devices may also be provided. Further, the arithmetic unit may be various types of arithmetic units, and the storage device may also be various types of storage devices. Further, the information processing device 1401 itself may be composed of one information processing device, or may be composed of a plurality of information processing devices. Further, the information processing device 1401 may be on the cloud or may be a server or the like.
  • An example system may include an AI device 1418.
  • Although the AI device 1418 is shown in this figure as connectable to the information processing device 1410 via the network 1417, the AI device 1418 may be inside the information processing device 1410 or directly connected to it. In that case, since the network 1417 need not be traversed, there is an advantage that information can be communicated quickly.
  • An example system may include one or more imaging devices 1401a to c in addition to the information processing device 1401.
  • the imaging device will be described later.
  • the system of one example may include a display device 1414 and / or an input device 1413.
  • In this figure, the display device is shown inside the information processing device 1410, but the display device may belong to an information processing device independent of the information processing device 1410. There may be one or more information processing devices related to such a display device, and they may be connected via the network 1417. The information processing device related to the display device may be a terminal, and may be used by an administrator who manages the system of this example or by a user who uses it. An input device 1413 may be provided corresponding to such a display device; the input device may be a touch panel integrated with the display device, or a separate device.
  • the input device may be a keyboard, pointer, mouse, or the like.
  • An example system may or may not include these input devices and / or display devices.
  • The system of one example may include a transmission unit capable of transmitting such information to the information processing device related to the display device, so that the display can be performed.
  • The system of the example may include an acquisition unit capable of acquiring information from the information processing device related to the input, so that input can be performed with the input device.
  • FIG. 1 assumes, for example, one floor of a department store, but it goes without saying that the location is not limited to this: shopping malls, stores, offices, financial institutions, medical facilities, accommodation facilities, government offices, educational facilities, cultural facilities, sports facilities, factories, aircraft facilities, vehicle facilities, and their affiliated facilities can all be treated in the same way, both inside and outside the facility, as long as one or more imaging devices are installed there.
  • Examples of the outside of the facility include outside the above-mentioned facility, a farm, a mine, a mountain field, or an artificial object constructed in these, and an imaging device described later may be installed in these places.
  • The imaging device described later may be installed at a place inside or outside the above-mentioned facilities, but it does not have to be installed in a fixed place. For example, by mounting it on a drone in flight, it may be possible to capture images from a viewpoint different from that of the facility.
  • In FIG. 1, imaging devices (1A, 1B, 1C, 1D), their imaging ranges (2A, 2B, 2C, 2D), and people (3 to 5) are schematically illustrated.
  • the image pickup device may be various types of image pickup devices. For example, it does not matter what kind of camera it is, such as a digital camera or an analog camera. The purpose of imaging may also be crime prevention, monitoring of visitors and employees, marketing, recording, and the like.
  • the camera may be a stereo camera, a panoramic camera, a construction camera, an underwater camera, or the like.
  • The lens may be an ordinary lens, or a wide-angle lens such as a fisheye or ultra-wide-angle lens may be used. A wide-angle lens has the advantage that one camera can capture a wider range than an ordinary lens; in particular, when a wide area is to be imaged, fewer imaging devices are needed with wide-angle lenses than with ordinary lenses.
  • Each imaging device may capture a moving image or a large number of still images.
  • the partial information extraction unit of one example may extract partial information from the image based on these.
  • the target of the partial information may be, for example, a person, a pet, a vehicle, or the like.
  • the number in the information related to the boundary may be the number of people, the number of pets, the number of vehicles, and the like.
  • The statistical processing unit of one example may specify the moving direction of the partial information by using the partial information. Using the moving direction, there is an advantage that the moving directions of the targets represented by the partial information can be organized.
  • The system of one example may detect a cluster of people as partial information related to feature points and specify the number of targets for each cluster: for example, person 3 as one person, person 4 as two people, and person 5 as three people.
  • An example statistical processing unit may store such information.
  • FIG. 16 shows an example stored by the statistical processing unit of one example.
  • The statistical processing unit of one example may generate information indicating the direction in which a target such as a person moves. Generating the moving direction of the target has the advantage that the moving directions of targets can be organized.
  • FIG. 17 is an example in which the movement direction generated by the statistical processing unit of one example is displayed.
  • the direction in which each person (3 to 5) moves is indicated by an arrow.
  • the display mode is not limited to the arrow, and may be various modes that the viewer can understand.
  • the statistical processing unit of one example may display the information related to the boundary in various images.
  • As the information related to the boundary, the statistical processing unit may use, as in the example above, the total number of pieces of partial information intersecting the boundary during a predetermined period.
  • Such a total may be displayed using an image.
  • Such an image may be of various kinds as long as it can be intuitively felt by a person.
  • For example, with the transparency of patterns or gradations, lower transparency may indicate a higher number and higher transparency a lower number; with hatching, a denser pattern of lines may indicate a higher number and a sparser pattern a lower number.
  • the mark may be a predetermined mark.
  • Marks indicating a large, medium, or small number may be predetermined, and such a mark may be displayed in the target region.
  • The pattern may also be associated with time zones and displayed as an animation showing the number in each region in each time zone. In this case, the viewer has the advantage that it is easier to understand how the numbers change over time.
  • Information related to boundaries may be generated as statistical information in real time.
  • The current number related to partial information may be displayed without aggregation, or the total over a relatively short predetermined period, such as 1 minute, 3 minutes, 5 minutes, 30 minutes, or 1 hour, may be displayed depending on the viewing situation.
  • In this case, the viewer can understand the degree of congestion.
  • When the statistical processing unit of one example displays the number of people on a terminal for the store so that the store's manager is the viewer, congestion can be handled appropriately by allocating store staff to crowded places, guiding customers, and so on.
  • the statistical processing unit of the system in the example may provide information on the congested area to the visitor, for example, in the area around the entrance of the store, so that the visitor of the store is the viewer.
  • When the real-time display is published on the Web, there is an advantage that the user can check congested places on the Web from a mobile terminal.
  • the statistical processing unit of one example may generate, as statistical information, a relationship between a position related to some information, a staying time at the position, and a product such as a product shelf in a store. As a result, it is possible to collect information on how long the user related to the partial information is browsing the product. Further, the statistical processing unit of one example may generate the degree of interest in the product by using the staying time as the statistical information. This has the advantage that the degree of interest of the user in the product can be generated.
  • the statistical processing unit of one example may generate various rankings as information related to boundaries.
  • A ranking may rank the stores in the facility by the number of visits of people who are the targets of partial information. For example, store A may be ranked first, store B second, and so on. In this case, there is an advantage that information on the number of visitors to each store can be generated.
  • The statistical processing unit of one example may generate, as a ranking, the stores most or least often visited before or after a specific store. In this case, there is an advantage that information on the relationships between stores can be generated from which stores users visit before or after a specific store; information can also be generated on which stores are visited in a specific order.
  • When the statistical processing unit of one example generates a ranking of the first store visited in the facility, information on the first-visited store in the facility can be generated.
  • the statistical processing unit of one example when the statistical processing unit of one example generates the ranking of the last visited store in the facility, there is an advantage that the information of the last visited store in the facility can be generated.
• A ranking may also be generated restricted to users who visited a limited number of stores, say i stores.
• When the statistical processing unit of one example generates a ranking of stores limited to users who visited only one store, there is the advantage that it can generate a ranking of stores that were clearly the user's intended destination.
• The statistical processing unit of one example may generate, as a ranking, the places visited before and after a specific store.
• The ranking of visited places may rank the stores and places visited before the specific store, or the stores and places visited after it. In this case, there is the advantage that it is possible to understand what kinds of places are visited before and after a specific store.
• The statistical processing unit of one example may also rank the stores and places visited within a predetermined number of visits before a specific store, or within a predetermined number of visits after it.
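The ranking variants above (visitor counts per store, stores visited immediately before a specific store, the first store visited) can be sketched from per-visitor store sequences. The visit data and store names below are illustrative assumptions.

```python
from collections import Counter

# Illustrative data: one time-ordered list of visited stores per visitor.
visits = [
    ["A", "B", "C"],
    ["B", "C"],
    ["A", "C", "B"],
    ["C"],
]

# Ranking by total number of distinct visitors per store.
by_visitors = Counter(s for seq in visits for s in set(seq)).most_common()

def visited_before(target, sequences):
    """Rank stores visited immediately before `target`."""
    c = Counter()
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            if cur == target:
                c[prev] += 1
    return c.most_common()

# Ranking of the first store visited in the facility.
first_visited = Counter(seq[0] for seq in visits).most_common()

print(by_visitors)                  # → [('C', 4), ('B', 3), ('A', 2)]
print(visited_before("C", visits))  # → [('B', 2), ('A', 1)]
```

The same pattern extends to "last store visited" (`seq[-1]`) or to windows of a predetermined number of visits before or after the specific store.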
• An example system may comprise: a partial information extraction unit that extracts partial information from images captured by imaging devices installed inside or outside a facility; and an information generation unit that uses the partial information to generate information on the target related to the partial information. Since such a system generates information using partial information relating to part of the image, there is the advantage that information can be generated with a small amount of calculation. In particular, when the partial information relates to a moving portion of a moving image that includes the image, it is highly likely to relate to a moving target, so there is the advantage that information on the target can be generated efficiently using a trained information generation function.
• Since the proportion of the image occupied by a person is then high, there is the advantage of being able to generate information indicating the person from specific viewpoints, such as identifying the person, or the person's gender, age, age group, height, physique, facial expression, and fashion items.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information; and a statistical processing unit that generates statistical information using the partial information.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information; a partial information relationship generation unit that generates relationships using the plurality of pieces of partial information; and a statistical processing unit that generates statistical information using the partial information. Since such a system generates statistical information based on the relationships among the partial information, there is the advantage that statistical information can be generated while easily deriving the movement of the target related to the partial information.
• The statistical processing unit of one example may generate information such as the movement of workers in a factory, using the movement of positions related to partial information based on images acquired from imaging devices installed in the factory. For example, it may generate the movement of a worker indicated by the positions related to the partial information, with the advantage that information indicating what kind of movement the worker makes can be generated. Such movements may also be displayed, with the advantage that a viewer can check, for example, whether the worker's movement is wasteful.
• The statistical processing unit of one example may generate and display the movement of a worker indicated by positions related to partial information as an average over a predetermined period. In this case, there is the advantage that information on the worker's movement can be generated as an average value for the period.
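One simple way to average worker movement over a predetermined period is to accumulate each day's positions onto a coarse floor grid and take the per-cell mean. The grid resolution and the per-day trajectories below are illustrative assumptions.

```python
import numpy as np

GRID = (4, 4)  # rows, cols of floor cells (assumed resolution)

def occupancy(trajectory, grid=GRID):
    """Count how often each floor cell is occupied in one day's trajectory."""
    g = np.zeros(grid)
    for r, c in trajectory:
        g[r, c] += 1
    return g

days = [
    [(0, 0), (0, 1), (1, 1)],   # day 1 positions (cell coordinates)
    [(0, 0), (1, 1), (1, 1)],   # day 2
]
# Average occupancy per cell over the period.
avg = np.mean([occupancy(d) for d in days], axis=0)
print(avg[1, 1])  # → 1.5 (cell (1,1) occupied 1.5 times per day on average)
```

The averaged grid can then be rendered, for example as a heat map, so that a viewer can spot wasteful movement patterns.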
• The system of one example may have a tracking unit and track the target related to the partial information across a plurality of imaging devices.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to workers based on images captured by imaging devices installed in a factory; and a statistical processing unit that generates statistical information using the partial information.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to workers based on images captured by imaging devices installed in a factory; a partial information relationship generation unit that generates relationships using the plurality of pieces of partial information; and a statistical processing unit that generates statistical information using the partial information.
• In a traffic application, the system of one example may use partial information based on images captured by imaging devices such as traffic cameras installed at places where vehicles can be located, such as intersections, highway tollgates, parking lots, or roadsides.
• The statistical processing unit of such an example system may generate statistical information on the vehicles related to the partial information, based on the partial information.
• Traffic cameras operate in an outdoor environment and generally do not operate under conditions as favorable as those indoors, so the computational load of image analysis may increase when a trained model is applied. The system of one example therefore has the advantage that, when the trained model is not applied or the number of times it is applied is reduced, statistical information on vehicles and the like can still be generated by using the partial information relationship generation unit described above.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to vehicles based on images captured by imaging devices installed at places where vehicles can be located; and a statistical processing unit that generates statistical information using the partial information.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to vehicles based on images captured by imaging devices installed at places where vehicles can be located; a partial information relationship generation unit that generates relationships using the plurality of pieces of partial information; and a statistical processing unit that generates statistical information using the partial information.
• At a construction site or mine, the system of one example may use partial information based on images captured by imaging devices installed at the site.
• The target related to the partial information may be a vehicle used at the construction site or mine.
• Imaging devices installed at construction sites and mines generally do not operate under conditions as favorable as those of imaging devices inside facilities, so the computational load may increase when a trained model is applied in image analysis. Therefore, when the trained model is not applied or the number of times it is applied is reduced, the system of one example has the advantage that statistical information on vehicles and the like can be generated using the partial information relationship generation unit and/or the target number estimation unit described above. Further, when the system of one example generates the movement of vehicles, there is the advantage that reference information can be generated for moving the vehicles efficiently.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to vehicles based on images captured by an imaging device installed at a position whose field of view includes part or all of the area in which the vehicles move; and a statistical processing unit that generates statistical information using the partial information.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to vehicles based on images captured by an imaging device installed at a position whose field of view includes part or all of the area in which the vehicles move; a partial information relationship generation unit that generates relationships using the plurality of pieces of partial information; and a statistical processing unit that generates statistical information using the partial information. Since such a system generates statistical information based on the relationships among the partial information, there is the advantage that statistical information can be generated while easily deriving the movement of the target related to the partial information.
  • the system of one example may image an animal with an imaging device.
• The animals may be, for example, cows, pigs, sheep, birds, horses, goats, reindeer, and the like.
  • the animal may be livestock.
• In this case, the imaging devices are installed outdoors, where a few imaging devices cover a wide area and generally do not operate under conditions as favorable as those of imaging devices inside facilities, so the computational load of image analysis may increase when a trained model is applied.
• The system of one example reduces the computational burden by using the partial information relationship generation unit and/or the target number estimation unit described above, and has the advantage of being able to generate statistical information about the animals.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to animals based on images captured by imaging devices installed outdoors; and a statistical processing unit that generates statistical information using the partial information.
• An example system may comprise: a partial information extraction unit that extracts a plurality of pieces of partial information related to animals based on images captured by imaging devices installed outdoors; a partial information relationship generation unit that generates relationships using the plurality of pieces of partial information; and a statistical processing unit that generates statistical information using the partial information. Since such a system generates statistical information based on the relationships among the partial information, there is the advantage that statistical information can be generated while easily deriving the movement of the target related to the partial information.
  • the imaging device may be installed in the drone.
• When the drone is equipped with an imaging device, there is the advantage that images can be captured from positions different from those of imaging devices installed in a facility.
• An imaging device provided on a drone also has the advantage that the imaging location can be set more freely by changing where the drone flies.
• When hovering is controlled so that the drone equipped with the imaging device flies at a substantially fixed position, the images captured by the drone are taken from substantially the same location, so there is the advantage that the partial information extraction unit and the partial information relationship generation unit can appropriately generate information on the partial information. Even when the drone moves at low speed, the same processing as in hovering can be applied by using the images after subtracting the expected amount of movement.
• For the prediction of movement, preset movement information, actual movement information acquired by a gyroscope, GPS, or the like, or movement information estimated from correspondences between feature points in the images may be used.
• The target of the images captured by the drone equipped with the imaging device may be any of the various targets described in 2.1 to 2.6 above, or other targets.
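The "subtract the expected movement amount" step for a slowly moving drone could look like the following sketch: estimate the camera's global shift from matched feature points, align the previous frame accordingly, then take the frame difference as in the hovering case. The feature matching itself is assumed to be done elsewhere, and the whole-pixel translation model is a simplifying assumption.

```python
import numpy as np

def estimate_shift(pts_prev, pts_cur):
    """Estimate the global camera shift (dy, dx) as the median
    displacement of matched feature points."""
    d = np.median(np.asarray(pts_cur) - np.asarray(pts_prev), axis=0)
    return int(round(d[0])), int(round(d[1]))

def compensated_diff(prev, cur, shift):
    """Shift the previous frame by the estimated camera motion,
    then take the absolute frame difference."""
    dy, dx = shift
    aligned = np.roll(prev, (dy, dx), axis=(0, 1))
    return np.abs(cur.astype(int) - aligned.astype(int))

prev = np.zeros((5, 5), dtype=np.uint8)
prev[2, 2] = 255                          # one bright feature
cur = np.roll(prev, (0, 1), axis=(0, 1))  # camera drifted one pixel right
shift = estimate_shift([(2, 2)], [(2, 3)])
residual = compensated_diff(prev, cur, shift)
print(residual.max())  # → 0 (drift removed, no real motion remains)
```

Anything left in `residual` after compensation corresponds to real motion in the scene rather than camera drift.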
• An example system may comprise: a partial information extraction unit that extracts one or more pieces of partial information based on images captured by an imaging device provided on a drone; and a statistical processing unit that generates statistical information using the one or more pieces of partial information.
• An example system may comprise: a partial information extraction unit that extracts one or more pieces of partial information based on images captured by an imaging device provided on a drone; a partial information relationship generation unit that generates relationships using the one or more pieces of partial information; and a statistical processing unit that generates statistical information using the one or more pieces of partial information. Since such a system generates statistical information based on the relationships among the partial information, there is the advantage that statistical information can be generated while easily deriving the movement of the target related to the partial information.
• The system of one example may have a notification unit that notifies a terminal of the system when a target related to partial information enters a predetermined area.
• The notification unit has the advantage that a user of the example system can learn that the target related to the partial information has entered the predetermined area.
• Specific areas include places where entry is prohibited, dangerous places, places where entry is undesirable, and the like. Inside or outside a facility, a specific area may be, for example, a factory site or certain installations (for example, a high-voltage power line facility, a substation, a water supply facility, or a hospital). Inside a house, specific areas include, for example, places with a source of fire such as a kitchen, or wet areas such as a bathroom, but are not limited to these.
• Examples of the target related to the partial information include minors, elderly people, dementia patients, suspicious persons, and the like.
• Users of the system include system administrators, parents of minors, relatives of elderly people, guardians of dementia patients, and the like, and the terminals of the system may be the terminals used by such persons.
• When the statistical processing unit of one example sends the above notification to the terminal at the moment of intrusion, the user of the example system can recognize the intrusion in real time.
• The user of the notified terminal can then respond quickly according to the mode of intrusion, such as who intruded, when, and where.
• The notification unit may be provided by the system in each of the application examples 2.1 to 2.6 described above.
• In that case, the specific place may be set as a prohibited place, a dangerous place, a place where entry is undesirable, or the like according to each application example, and the target related to the partial information may be a person, a vehicle, livestock, or the like.
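A minimal sketch of the notification check: test whether a tracked target's position lies inside a predetermined restricted area and, if so, invoke a notification callback. The ray-casting polygon test is a standard technique; the area coordinates and the callback are illustrative assumptions.

```python
def in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 > y) != (y1 > y) and x < (x1 - x0) * (y - y0) / (y1 - y0) + x0:
            inside = not inside
    return inside

RESTRICTED = [(0, 0), (10, 0), (10, 10), (0, 10)]  # e.g. a substation yard

def check_positions(positions, notify):
    """Notify for every target currently inside the restricted area."""
    for target_id, pt in positions:
        if in_polygon(pt, RESTRICTED):
            notify(target_id, pt)  # e.g. push a message to a guardian's terminal

alerts = []
check_positions([("person_1", (5, 5)), ("person_2", (20, 3))],
                lambda tid, pt: alerts.append(tid))
print(alerts)  # → ['person_1']
```

Running the check on every frame (or every few frames) gives the real-time notification behavior described above.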
• The external information processing device may be a cloud or a server using software such as SaaS, PaaS, or IaaS.
• The processes and procedures described in the documents of the present application may be implementable not only by what is explicitly described in the embodiments but also by software, hardware, or a combination thereof. Further, the processes and procedures described in the documents of the present application may be implementable by various computers by implementing them as a computer program. These computer programs may be stored in a storage medium, which may be non-transitory or transitory.
  • Image pickup device 1410
  • Information processing device 1411
  • Computing device 1412
  • Storage device 1413
  • Input device 1414
  • Display device 1415
  • Bus 1416
  • Communication device 1417
• Network 1418
• AI device

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

[Problem] One example of a system according to the present invention enables an image obtained from an imaging device to be utilized more appropriately. [Solution] A system comprising: an extraction unit for extracting, from a target image, partial information in the target image that pertains to a moving portion in a moving image that includes the target image; and a generation unit for generating, on the basis of the extracted partial information, information about a target that pertains to the partial information. A system in which the generation unit generates the information about the target by applying the image pertaining to the partial information to a machine learning unit that has learned the relationship between images and targets. A system in which the extraction unit extracts the partial information using the motion of the target in the moving image. A system in which the extraction unit extracts the partial information using the difference between one image and another image among the plurality of images constituting the moving image.

Description

[Name of invention determined by ISA based on Rule 37.2] System, method, or program
The technology disclosed in this application relates to an information processing system, an information processing device, a server device, a program, or a method.
In recent years, methods for utilizing information in images have been studied.
Patent Document 1: JP-A-2019-48365
Patent Document 2: JP-A-2012-164255
Patent Document 3: JP-A-2013-239011
Patent Document 4: JP-A-2010-118862
Patent Document 5: JP-A-2009-110185
However, the inventions according to Patent Documents 1 and 2 merely improve accuracy for a part of the subject. The invention according to Patent Document 3 is limited to improving clustering itself. The invention according to Patent Document 4 can only perform background movement. The invention according to Patent Document 5 points out problems with using a camera but requires a laser. The invention according to Non-Patent Document 1 has not reached a concrete invention. Therefore, various embodiments of the present invention provide an information processing system, an information processing device, a server device, a program, or a method in order to solve the above problems.
The first system of one embodiment comprises:
an extraction unit that extracts, from a target image, partial information in the target image relating to a moving portion in a moving image that includes the target image; and
a generation unit that generates, based on the extracted partial information, information on a target related to the partial information.
Alternatively, the first system of one embodiment may comprise:
an extraction unit that extracts partial information from a target image; and
a generation unit that generates, based on an image related to the extracted partial information, information on a target related to the partial information,
wherein, in the image related to the partial information, the image information amount of a part of the background region is smaller than the image information amount of the corresponding portion in the target image.
The second system of one embodiment is the first system, wherein the generation unit generates the information on the target related to the image related to the partial information by applying that image to a machine learning unit that has machine-learned the relationship between images and targets.
The third system of one embodiment is the system of either the first or second system, wherein the extraction unit extracts the partial information using the movement of the target in the moving image.
The fourth system of one embodiment is the system of any one of the first to third systems, wherein the extraction unit extracts the partial information using the difference between one image and another image among the plurality of images constituting the moving image.
The fifth system of one embodiment is the system of any one of the first to fourth systems, wherein the partial information includes a portion corresponding to the movement of the target in the moving image.
The sixth system of one embodiment is the system of any one of the first to fifth systems, wherein the generation unit comprises:
a transmission unit that transmits the image related to the partial information; and
an acquisition unit that acquires information on the target corresponding to the transmitted image related to the partial information.
The seventh system of one embodiment is the system of any one of the first to sixth systems, wherein the generation unit comprises a machine learning unit that has machine-learned the relationship between images and targets, and generates the information on the target related to the image related to the partial information by applying that image to the machine learning unit.
The eighth system of one embodiment is the system of any one of the first to seventh systems, wherein the partial information is a partial image of the target image.
The ninth system of one embodiment is the system of any one of the first to eighth systems, wherein the partial information is information relating to feature points of a partial image of the target image.
The tenth system of one embodiment is the system of any one of the first to ninth systems, wherein the partial information is information relating to the background subtraction of the target image.
The eleventh system of one embodiment is the system of any one of the first to tenth systems, wherein the partial information is an image obtained by binarizing the background difference of the target image.
The twelfth system of one embodiment is the system of any one of the first to eleventh systems, further comprising an estimation unit that estimates, for a cluster related to the partial information, the number of targets based on a predetermined rule.
The thirteenth system of one embodiment is the system of any one of the first to twelfth systems, further comprising a statistical processing unit that generates statistical information.
The fourteenth method of one embodiment is a method in which a computer executes:
a step of extracting, from a target image, a partial image relating to a moving portion in a moving image that includes the target image; and
a step of generating, based on the extracted partial image, information on a target related to the partial image.
The fifteenth program of one embodiment is a program for causing a computer to function as any one of the first to thirteenth systems.
The sixteenth system of one embodiment comprises:
a clustering unit that generates clusters based on corresponding feature points in at least two of the plurality of images constituting a moving image; and
an estimation unit that estimates the number of targets in a cluster based on a predetermined rule.
The seventeenth system of one embodiment is the system of any one of the first to sixteenth systems, wherein the predetermined rule is associated with the camera from which the image was acquired.
The eighteenth system of one embodiment is the system of any one of the first to seventeenth systems, wherein the extraction unit extracts, from a first target image and a second target image constituting the moving image,
a first partial image in the first target image, and
a second partial image in the second target image,
and the system comprises a determination unit that determines the relationship between the first partial image and the second partial image.
The nineteenth system of one embodiment is the system of any one of the first to eighteenth systems, comprising a statistical processing unit that performs statistical processing using the position of the target.
The twentieth program of one embodiment is a program for causing a computer to function as any one of the first to nineteenth systems.
According to one embodiment of the present invention, data obtained from images can be utilized more appropriately.
FIG. 1 is a block diagram showing a specific example of the functions of the information processing device according to one embodiment.
FIG. 2 is a diagram schematically showing the extraction of information according to one embodiment.
FIG. 3 is an example schematically showing data possessed by the system according to one embodiment.
FIG. 4 is a diagram schematically showing the extraction of information by the system according to one embodiment.
FIG. 5 is an example schematically showing data possessed by the system according to one embodiment.
FIG. 6 is an example schematically showing the relationships among information processed in the system according to one embodiment.
FIG. 7 is an example explaining the number of clusters according to one embodiment.
FIG. 8 is an example of a screen displayed by the system according to one embodiment.
FIG. 9 is a diagram explaining an example of a boundary handled by the system according to one embodiment.
FIG. 10 is an example of a screen displayed by the system according to one embodiment.
FIG. 11 is an example of a screen displayed by the system according to one embodiment.
FIG. 12 is an example of a screen displayed by the system according to one embodiment.
FIG. 13 is an example schematically showing data possessed by the system according to one embodiment.
FIG. 14 is a block diagram showing an overall picture including the configuration of the system according to one embodiment.
FIG. 15 is an example schematically showing a facility to which the system according to one embodiment is applied.
FIG. 16 is an example schematically showing data possessed by the system according to one embodiment.
FIG. 17 is an example of a screen displayed by the system according to one embodiment.
FIG. 18 is an example of a flow processed by the system according to one embodiment.
FIG. 19 is an example of a flow processed by the system according to one embodiment.
FIG. 20 is an example of a flow processed by the system according to one embodiment.
1. Functions of the system of this example
The functions of the system of this example will be described with reference to FIG. 1, which is a block diagram showing a specific example of the functions of the system. An example system may comprise some or all of: a partial information extraction unit, a partial information relationship generation unit, a position identification unit, a target number estimation unit, an information generation unit, a boundary unit, a statistical processing unit, and a tracking unit. Not all of these functions are essential to the system; each function on its own has technical significance corresponding to that function, and each combination of functions likewise has technical significance corresponding to that combination.
1.1. Partial information extraction unit 11
The partial information extraction unit has a function of extracting partial information from a target image. The target image may be one of a plurality of images constituting a moving image. The partial information may relate to a moving portion in the moving image that includes the target image; for example, it may be a portion corresponding to the movement of a target in the moving image, something that includes such a portion, or a part of something that includes such a portion. The partial information may also be a partial image of the target image, or feature points of a target in the target image. The partial information may be partial in terms of the area of the target image, partial in terms of the image information of the target image, or a combination of these. Where the image information is partial, for example, if the image related to the partial information is a bitmap image, it may have a reduced number of pixels or a reduced number of bits per pixel; if it is a vector image, it may be a subset of the constituent figures. The amount of image information is also referred to as the image information amount. Examples of the image information amount include the number of pixels, the number of bits per pixel, the number of colors, and the number of figures, but any measure indicating the amount of information in an image may be used.
An example partial information extraction unit may have a partial image extraction unit and/or a feature point extraction unit in order to extract partial information.
1.1.1. Partial image extraction unit
The partial image extraction unit has a function of extracting a partial image from the target image. In a moving image composed of a plurality of images, an example difference unit may extract, from the difference between one image and another image, a partial image corresponding to that difference. The partial image may be a part of the one image or a part of the other image. The one image and the other image may be consecutive images in the time series of the moving image, or may be separated in the moving image by a fixed interval, for example from several microseconds to several seconds. When the time interval between the one image and the other image is at least a predetermined period, there is an advantage that the amount of computation can be reduced. The partial image extraction unit may extract one or a plurality of partial images from the target image.
Taking as an example the case where the imaging device is a traffic camera that images a road along which vehicles, people, animals (including pets), and the like (hereinafter also referred to as "vehicles, etc.") pass, the difference between one image and another image may be a vehicle or the like in a moving part of the moving image, arising from the time difference between the one image and the other image.
Taking as another example the case where the imaging device is a camera in a facility described later, the difference between one image and another image may be a user, a manager, a service provider, a pet, or the like in the facility (hereinafter also referred to as "users, etc.") in a moving part of the moving image, arising from the time difference between the one image and the other image.
Further, either the one image or the other image described above may be a background image. For example, for images captured by a traffic camera that images a road along which automobiles and the like pass, an image captured during a period when no automobiles or the like are present may be used as the background image; for images captured by a camera in a facility described later, an image captured while no users or the like are present in the facility may be used as the background image. Moreover, even an image that includes automobiles or the like may be used as the background image after these have been removed by deletion processing.
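The difference-based extraction described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function name, the thresholding scheme, and the bounding-box output are all assumptions, and the "reference" may be either a preceding frame or a background image, as in the text.

```python
# A minimal sketch of difference-based partial image extraction.
# Frame values are grayscale intensities; all names are illustrative.

def extract_moving_region(frame, reference, threshold=30):
    """Return the bounding box (top, left, bottom, right) of pixels that
    differ from the reference frame (a previous frame or a background
    image) by more than `threshold`, or None if nothing moved."""
    rows = [r for r in range(len(frame))
            if any(abs(frame[r][c] - reference[r][c]) > threshold
                   for c in range(len(frame[0])))]
    cols = [c for c in range(len(frame[0]))
            if any(abs(frame[r][c] - reference[r][c]) > threshold
                   for r in range(len(frame)))]
    if not rows or not cols:
        return None
    return (rows[0], cols[0], rows[-1] + 1, cols[-1] + 1)

background = [[0] * 6 for _ in range(6)]        # empty road as background image
frame = [row[:] for row in background]
for r in range(2, 4):                           # a "vehicle" appears here
    for c in range(1, 3):
        frame[r][c] = 200

print(extract_moving_region(frame, background))  # (2, 1, 4, 3)
```

Because only the boxed region needs further processing, the downstream relationship generation works on a small area, matching the computational-cost advantage noted in the text.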
1.1.2. Feature point extraction unit
The feature point extraction unit has a function of extracting, from the target image, feature points in a part of the target image. In a moving image composed of a plurality of images, an example feature point extraction unit may extract feature points from the difference between the feature points of one image and the feature points of another image. A feature point may be a part of the one image or a part of the other image. The relationship between the one image and the other image is the same as described above. The feature point extraction unit may extract one or a plurality of feature points from one target image.
A feature point may indicate a part of a target in the image. For example, if the target is a living creature such as a person or a pet, the feature points may be some or all of the head, ears, eyes, nose, mouth, shoulders, elbows, hands, waist, knees, ankles, and so on; if the target is a vehicle, they may be some or all of the bumpers, wheels, roof, pillars, side sills, vehicle contour, and so on. These are all examples; each feature point need not be specifically tied to a particular part of a person or vehicle, and whatever points are obtained as feature points by the method used may serve as the feature points. The feature point identification unit may also identify feature points in an image by any other well-known method. When one or more feature points identified from an image are associated with one target A, such feature points may be referred to as "feature points of A".
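Since the text allows any well-known feature extraction method, the following sketch uses a deliberately simple stand-in: points where the local intensity change exceeds a threshold. The function name, gradient formula, and threshold are illustrative assumptions, not the method of the disclosure.

```python
# Illustrative stand-in for a feature point extractor: a pixel is a feature
# point when the sum of its horizontal and vertical intensity changes
# (a crude gradient magnitude) exceeds a threshold.

def feature_points(img, threshold=50):
    """Return (row, col) points with a large local intensity change."""
    pts = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            gx = abs(img[r][c + 1] - img[r][c - 1])
            gy = abs(img[r + 1][c] - img[r - 1][c])
            if gx + gy > threshold:
                pts.append((r, c))
    return pts

img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 200, 0, 0],   # one bright spot, e.g. part of a target
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
print(feature_points(img))  # [(1, 2), (2, 1), (2, 3), (3, 2)]
```

In practice a detector such as a corner detector would be used; the point here is only that the extractor maps an image to a small set of (row, col) points for the downstream vector and clustering units.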
1.2. Partial information relationship generation unit 12
The partial information relationship generation unit has a function of generating relationships between pieces of partial information in target images. For a first target image and a second target image constituting a moving image, an example partial information relationship generation unit may have a function of generating a relationship between first partial information in the first target image and second partial information in the second target image.
The relationship may be the presence or absence of a relationship between the first partial information and the second partial information, or may be an index of the strength of the relationship: for example, that the first partial information and the second partial information are related, that they are not related, or the degree to which they are related. The presence or absence of a relationship may be whether the target corresponding to the first partial information in the first target image and the target corresponding to the second partial information in the second target image are the same in the real world. This identity may mean, in the real world, that a single target is the same or that a plurality of targets are the same; in the case of a plurality of targets, they may be a combination of the same kind or of different kinds. In this sense, such targets may hereafter be referred to as a "pseudo-identical target". Examples of pseudo-identical targets include a person, a vehicle, a pet, a person and another person, a person and a bicycle, a person and a pet, an automobile and the person in it, a vehicle carrying another vehicle, and so on; in short, anything that moves together as a certain region within the images of the moving image captured by the imaging device. The strength of the relationship may be an index indicating how likely it is that the target corresponding to the first partial information in the first target image and the target corresponding to the second partial information in the second target image are a pseudo-identical target. The index may be a numerical value or one of a plurality of ranks given in advance.
The relationship may also be a specific correspondence between the first partial information and the second partial information: for example, a correspondence between the 11th location in the first partial information and the 21st location in the second partial information, a correspondence between the 12th location in the first partial information and the 22nd location in the second partial information, and so on. Here, the 11th, 12th, 21st, and 22nd locations may each be information within the first partial information or the second partial information, and may be specified by a position in the image, a feature point, a vector, a cluster, or the like.
An example partial information relationship generation unit may store the relationship in association with the partial information. For example, an example partial information relationship generation unit may store the relationship between the first partial information and the second partial information.
1.2.1. Image example
When an example partial information relationship generation unit can process images as partial information, it may generate a relationship between a first partial image related to the first partial information and a second partial image related to the second partial information.
As the method by which an example partial information relationship generation unit generates the relationship between the first partial image in the first target image and the second partial image in the second target image, various methods indicating that the first partial image and the second partial image correspond to the same target may be used, such as pattern matching between the images, comparison of the densities of the images, or the closeness of the positions of the images; the method is not limited. Since the first partial image and the second partial image are each only a part of the image captured by the imaging device and are small in area, there is an advantage that the computational burden of the processing for generating the relationship is small.
The density of images may be judged, for example, by whether the difference in the proportion of black or white pixels in a binary image, such as a silhouette of the image or a black-and-white version of the image, is within a predetermined range. When relationships are generated using images expressed with fewer bits than at the time of capture in this way, there are advantages in improved computation speed and reduced privacy constraints on the images. In particular, reducing privacy constraints when handling image data including people and vehicles in an information processing device responds to a demand that has grown in recent years.
As the closeness of the positions of images, for example, the closeness between the position of partial image 02a within image 01a and the position of partial image 02b within image 01b may be used.
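The two lightweight cues above, silhouette density and position closeness, can be sketched together. This is an illustrative assumption about how such cues might be combined; the function names and thresholds are not from the disclosure.

```python
# Sketch of two similarity cues named in the text: the foreground density of
# a binarized partial image, and the closeness of partial-image positions
# across frames. Thresholds are illustrative assumptions.

def density(binary_img):
    """Fraction of foreground (1) pixels in a binary silhouette."""
    total = sum(len(row) for row in binary_img)
    return sum(sum(row) for row in binary_img) / total

def likely_same_object(img_a, pos_a, img_b, pos_b,
                       max_density_diff=0.1, max_distance=5.0):
    """Judge two partial images as the same (pseudo-identical) target when
    their silhouette densities and frame positions are both close."""
    d_diff = abs(density(img_a) - density(img_b))
    dist = ((pos_a[0] - pos_b[0]) ** 2 + (pos_a[1] - pos_b[1]) ** 2) ** 0.5
    return d_diff <= max_density_diff and dist <= max_distance

a = [[1, 1], [1, 0]]   # silhouette in frame t,   density 0.75
b = [[1, 0], [1, 1]]   # silhouette in frame t+1, density 0.75
print(likely_same_object(a, (10, 10), b, (12, 11)))  # True
print(likely_same_object(a, (10, 10), b, (40, 40)))  # False: moved too far
```

Working on binary silhouettes rather than full-resolution captures reflects the privacy and speed advantage the text describes.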
FIG. 2 is an example schematically showing the extraction of partial information by an example partial information extraction unit. In this example, images 01a to 01d constituting a moving image captured by the imaging device were captured in chronological order in this order, and the partial images 02a to 02d within the respective images are an example in which, for each pair of adjacent images, an image corresponding to their difference is extracted as partial information. These partial images 02a to 02d may be partial images extracted as portions related to motion. The partial images 02a to 02d may be parts of the images 01a to 01d, respectively.
In this example, a case is described in which an example partial information relationship generation unit generates relationships for partial images as partial information. For example, it may generate the relationship between 02a and 02b. For the partial images, an example partial information relationship generation unit may indicate, by pattern matching or by the densities of the images, that 02a and 02b both correspond to the same target. When an example partial information relationship generation unit determines that the images correspond to the same target, it may store the partial images in association with one tracking ID as the same target. FIG. 3 is an example in which partial image IDs 001, 003, and 006 are stored in association with tracking ID 01, and partial image IDs 002, 004, and 009 are stored in association with tracking ID 02. With such a configuration, even if a target appearing in a part of an image moves across the different images constituting the moving image, it is possible to determine where it has moved.
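The tracking table of FIG. 3 can be sketched as a simple mapping. The IDs mirror the figure; the dictionary structure and function name are assumptions made for illustration.

```python
# Sketch of the FIG. 3 tracking table: each tracking ID accumulates the
# partial-image IDs judged to belong to the same (pseudo-identical) target
# across the frames of the moving image.

tracking_table = {}

def associate(tracking_id, partial_image_id):
    """Record that a partial image belongs to the target `tracking_id`."""
    tracking_table.setdefault(tracking_id, []).append(partial_image_id)

for pid in ("001", "003", "006"):   # partial images matched to tracking ID 01
    associate("ID01", pid)
for pid in ("002", "004", "009"):   # partial images matched to tracking ID 02
    associate("ID02", pid)

print(tracking_table)
# {'ID01': ['001', '003', '006'], 'ID02': ['002', '004', '009']}
```

Reading a tracking ID's list in order then reproduces the target's trajectory across the frames, which is exactly the "where has it moved" determination described above.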
As shown in FIG. 4, when a plurality of partial images are extracted from image 01b as two or more pieces of partial information, an example partial information relationship generation unit may generate both a relationship between partial image 02a and partial image 02b and a relationship between partial image 02a and partial image 03b, and may then identify, among the plurality of partial images, the partial image most strongly related to partial image 02a. This has the advantage of increasing the possibility that a target can be tracked more appropriately even when the target moves quickly within the images.
Further, when a plurality of candidate images exist as described above, an example partial information relationship generation unit may generate relationships in order of closeness, and when the strength of a relationship is higher than a predetermined value, it need not generate relationships with the remaining candidate images. In this case, there is an advantage that the amount of computation is reduced.
1.2.2. Feature point example
When an example partial information relationship generation unit can process feature points as partial information, it may generate a relationship between one or more first feature points related to the first partial information and one or more second feature points related to the second partial information. The relationship may be a correspondence between the one or more first feature points and the one or more second feature points, may be one or more vectors generated from the first feature points and the second feature points, or may be one or more clusters generated from the one or more vectors.
1.2.2.1. Vector specifying unit
The vector specifying unit has a function of specifying vectors of feature points. A feature point vector may be specified from preceding and succeeding images. For example, when the second image follows the first image, a vector may be specified whose start point is a feature point A in the first image and whose end point is the feature point B in the second image corresponding to feature point A. That is, for a feature point A in the first image, the vector unit may determine the feature point B in the second image corresponding to feature point A and thereby specify the vector. The vector unit may also use any other well-known method of specifying vectors.
Here, vectors smaller than a predetermined value may be used as they are, or may be excluded. Vectors smaller than the predetermined value can be excluded because the motion between the first image and the second image can be judged to be minute; this has the advantage of reducing the amount of computation.
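The vector specification with small-vector exclusion can be sketched as follows. The feature-point correspondences are given directly here (in practice they would come from a matching step), and the function name and minimum length are illustrative assumptions.

```python
# Sketch of the vector specifying unit: each vector runs from a feature point
# in the first image to its corresponding feature point in the second image,
# and vectors shorter than `min_length` are excluded as negligible motion.

def motion_vectors(matches, min_length=1.0):
    """matches: list of ((x1, y1), (x2, y2)) corresponding feature points.
    Returns (start, end) vectors whose length is at least `min_length`."""
    vectors = []
    for (x1, y1), (x2, y2) in matches:
        length = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if length >= min_length:
            vectors.append(((x1, y1), (x2, y2)))
    return vectors

matches = [((0, 0), (3, 4)),      # length 5: kept
           ((5, 5), (5.2, 5.1))]  # length ~0.22: excluded as minute motion
print(motion_vectors(matches))    # [((0, 0), (3, 4))]
```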
1.2.2.2. Clustering unit
The clustering unit has a function of clustering vectors. The clustering unit may generate clusters as the result of the clustering. The clustering may be based on the positions of the vectors, on the lengths of the vectors, or on both the positions and the lengths of the vectors. The clustering unit may also use any other well-known clustering method.
When the clustering unit performs clustering using the positions of vectors, for example a clustering method that classifies vectors whose positions differ by more than a predetermined value into different clusters, there is an advantage that, because an object in an image occupies a certain area, such feature points can be presumed to belong to different objects.
When the clustering unit uses vector lengths, for example a clustering method that classifies vectors of length greater than a predetermined value into the same cluster, or vectors whose lengths fall within a predetermined range into the same cluster, there is an advantage that feature points of vectors with the same speed can be judged to belong to the same object. For example, when the speeds of a first target and a second target differ by a predetermined amount or more, the vectors of the first target and the vectors of the second target can be accurately clustered into different classes from the viewpoint of speed, even if the first target and the second target are close in position.
When the clustering unit uses both the positions and the lengths of the vectors, it has both of the advantages described above, and together they provide the advantage of clustering with higher accuracy.
Since a vector is composed of a feature point in the first image and a feature point in the second image, using the position of a vector means using the position of the feature point in the first image and/or the position of the feature point in the second image. The position of a vector may be the position of its start point or the position of its end point.
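Clustering on both position and length can be sketched with a simple greedy scheme standing in for any well-known clustering method. The thresholds and the choice of the start point as the vector's position are illustrative assumptions.

```python
# Sketch of the clustering unit: vectors are grouped using both position
# (start point) and length, so nearby objects moving at clearly different
# speeds still fall into different clusters.

def vector_length(v):
    (x1, y1), (x2, y2) = v
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

def cluster_vectors(vectors, max_pos_dist=3.0, max_len_diff=1.0):
    """Greedily assign each vector to the first cluster whose representative
    is close in both start position and length, else start a new cluster."""
    clusters = []  # each cluster is a list of vectors
    for v in vectors:
        (sx, sy), _ = v
        for cluster in clusters:
            (cx, cy), _ = cluster[0]
            pos_ok = ((sx - cx) ** 2 + (sy - cy) ** 2) ** 0.5 <= max_pos_dist
            len_ok = abs(vector_length(v) - vector_length(cluster[0])) <= max_len_diff
            if pos_ok and len_ok:
                cluster.append(v)
                break
        else:
            clusters.append([v])
    return clusters

vectors = [((0, 0), (2, 0)),    # fast-moving target
           ((1, 0), (3, 0)),    # same target: near start, same speed
           ((1, 1), (1.2, 1))]  # nearby but much slower: separate cluster
print(len(cluster_vectors(vectors)))  # 2
```

The third vector is close in position to the first two but far in length, so the length criterion separates it, illustrating the speed-based advantage described above.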
FIG. 5 is an example of clusters generated by the clustering unit, and an example of a table stored by the clustering unit. This figure shows a state in which clusters have been generated for one or more vectors each composed of two feature points, and the clusters C1 and C2 have been assigned.
An example partial relationship generation unit can use feature points to generate relationships between one or more first feature points and one or more second feature points. In particular, when the partial information of a target image corresponds to two or more targets in the real world, even if those two or more targets move separately, there is an advantage that the separate movements can be classified into clusters according to the directions of the vectors based on the correspondence between feature points. For example, FIG. 6 is an example of a diagram schematically explaining how cluster IDs based on feature points of partial information are related over the time flow t = 0 to 3. It shows that cluster ID 01 is based on the image at time t = 0, cluster ID 02 on the image at time t = 1, cluster IDs 03 and 05 on the image at time t = 2, and cluster IDs 04 and 06 on the image at time t = 3. Here, the feature points of the image at time t = 1 formed a single cluster, cluster 02, whereas the feature points of the image at time t = 2 form two clusters, clusters 03 and 05; that is, the figure shows a state in which cluster 02 has split into cluster 03 and cluster 05.
Here, the flow of one processing example using an example partial information relationship generation unit will be described. For a first image and a second image constituting a moving image: first, an example partial information extraction unit extracts a first partial image from the first image (step 1); next, it extracts a second partial image from the second image (step 2); next, the partial information relationship generation unit generates the relationship between the first partial image and the second partial image (step 3).
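The three steps above can be sketched as a small pipeline. The extraction and relation functions below are deliberately trivial stand-ins (thresholded differencing and a density comparison); their names and thresholds are assumptions for illustration only.

```python
# The three-step flow: extract a partial image from each of two frames,
# then relate the two partial images.

def extract_partial(image, reference, threshold=30):
    """Steps 1 and 2: keep only pixels that changed against a reference."""
    return [[v if abs(v - r) > threshold else 0
             for v, r in zip(row, ref_row)]
            for row, ref_row in zip(image, reference)]

def relate(partial_a, partial_b, max_density_diff=0.1):
    """Step 3: relate two partial images by foreground-density similarity."""
    def density(img):
        return sum(v > 0 for row in img for v in row) / \
               sum(len(row) for row in img)
    return abs(density(partial_a) - density(partial_b)) <= max_density_diff

background = [[0, 0, 0], [0, 0, 0]]
frame1 = [[200, 0, 0], [0, 0, 0]]   # target at the left
frame2 = [[0, 200, 0], [0, 0, 0]]   # same target, shifted right

p1 = extract_partial(frame1, background)   # step 1
p2 = extract_partial(frame2, background)   # step 2
print(relate(p1, p2))                      # step 3 -> True
```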
An example system may be a system comprising:
an extraction unit that extracts first partial information related to a first image and second partial information related to a second image; and
a generation unit that generates a relationship between the first partial information and the second partial information.
When an example system can generate the relationship between the first partial information and the second partial information, there is an advantage that the partial information can be tracked whether or not specific information about the partial information, such as a name, is identified using a trained model or the like.
Next, the flow of another processing example using an example partial information relationship generation unit will be described. For a first image and a second image constituting a moving image: first, an example partial information extraction unit extracts a first partial image from the first image (step 1). Next, it extracts a second partial image from the second image (step 2). Next, an example partial information relationship generation unit generates the relationship between the first partial image related to the first partial information and the second partial image related to the second partial information, and if it determines that this relationship indicates that the first partial image and the second partial image are a pseudo-identical target, it stores the relationship. If it does not determine that they indicate a pseudo-identical target, an example partial information relationship generation unit generates the relationship between the first partial information and the second partial information using clusters of the feature points related to the first partial information and the feature points related to the second partial information, and determines whether they are a pseudo-identical target.
An example system may be a system comprising:
an extraction unit that extracts first partial information related to a first image and second partial information related to a second image;
a first generation unit that generates a relationship between a first partial image related to the first partial information and a second partial image related to the second partial information; and
a second generation unit that, after the processing of the first generation unit, generates a relationship between the first partial information and the second partial information based on feature points related to the first partial information and feature points related to the second partial information.
Compared with comparing the images themselves, judging a pseudo-identical target using feature points requires feature point extraction and clustering. Therefore, by giving priority to the comparison between images and performing cluster-based processing only when the image comparison is inconvenient, for example when the pseudo-identical relationship cannot be determined from the image comparison, there is an advantage that the overall amount of processing can be reduced.
An example system may be a system comprising:
an extraction unit that extracts first partial information related to a first image and second partial information related to a second image;
a first generation unit that generates a relationship between a first partial image related to the first partial information and a second partial image related to the second partial information;
a second generation unit that, after the processing of the first generation unit, generates a relationship between the first partial information and the second partial information based on feature points related to the first partial information and feature points related to the second partial information;
an information generation unit that, after the processing of the second generation unit, generates information related to the first partial image and information related to the second partial image using a trained model; and
a third generation unit that generates a relationship between the first partial information and the second partial information based on the generated information related to the first partial image and the generated information related to the second partial image.
When both the comparison between images and the cluster-based processing are inconvenient, generating information for each piece of partial information using a trained model, although computationally expensive, and generating the relationship from that information has the advantage that the relationship can be generated while still reducing the overall amount of processing.
1.3. Position specifying unit 13
The position specifying unit has a function of specifying a position related to partial information. When the partial information is an image, an example position specifying unit may specify a position related to that image; when the partial information relates to feature points, it may specify a position related to those feature points. Here, the position related to the feature points may include the position of a vector generated from the feature points, or the position of a cluster generated from such vectors. An example position specifying unit may store a partial image related to partial information in association with the position of that partial image, and may likewise store information on the feature points related to partial information in association with the position of those feature points.
The position of a partial image may be any information that indicates the position of that partial image within the target image. For example, it may be the center or centroid of a polygon, such as a rectangle, that contains the partial image, or the center or centroid of an image containing the partial image.
The position related to feature points may be specified by various methods using information on those feature points. For example, a position generated from one or more vectors based on the feature points may be specified as the position related to the feature points. Examples include, but are not limited to, the center point of the coordinates of the one or more vectors, and the center, centroid, or vertex coordinates of a polygon, such as a rectangle, that contains those coordinates. Here, the one or more vectors may be vectors included in the same cluster.
In the former case, for example, when one cluster consists of two vectors, one from (x11, y11) to (x12, y12) and another from (x21, y21) to (x22, y22), the x-coordinate of the center point may be the average of x11, x12, x21, and x22, and the y-coordinate of the center point may be the average of y11, y12, y21, and y22. Here, the average may be computed by various methods, such as the arithmetic mean, geometric mean, harmonic mean, or generalized mean.
In the latter case, for example, when one cluster consists of the same two vectors, the vertices of the rectangle may be set as (MINx, MINy), (MINx, MAXy), (MAXx, MINy), and (MAXx, MAXy), where MINx and MAXx are the minimum and maximum x-coordinates over the coordinates of the vectors, and MINy and MAXy are the minimum and maximum y-coordinates over the coordinates of the vectors.
These are merely examples; it suffices that the position of the target can be specified in a way that reflects the information of each cluster.
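As an illustrative sketch (not part of the claimed configuration), the two position calculations above can be written as follows, assuming each vector of a cluster is represented by its start and end coordinates; the function names are hypothetical:

```python
def cluster_centroid(vectors):
    """Center point: arithmetic mean of all endpoint coordinates of the
    cluster's vectors. Each vector is ((x1, y1), (x2, y2)); the text also
    allows other averages (geometric, harmonic, generalized)."""
    xs = [x for (x1, y1), (x2, y2) in vectors for x in (x1, x2)]
    ys = [y for (x1, y1), (x2, y2) in vectors for y in (y1, y2)]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def cluster_bounding_box(vectors):
    """Axis-aligned rectangle (MINx, MINy)-(MAXx, MAXy) that encloses all
    endpoint coordinates of the cluster's vectors."""
    xs = [x for (x1, y1), (x2, y2) in vectors for x in (x1, x2)]
    ys = [y for (x1, y1), (x2, y2) in vectors for y in (y1, y2)]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Example with two vectors forming one cluster:
vecs = [((1, 2), (3, 4)), ((5, 6), (7, 8))]
print(cluster_centroid(vecs))      # (4.0, 5.0)
print(cluster_bounding_box(vecs))  # ((1, 2), (7, 8))
```

Either value may then be stored as the position associated with the cluster's partial information.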
1.4. Target number estimation unit 14
The target number estimation unit has a function of estimating the number of targets related to partial information when that partial information is information related to feature points.
An example target number estimation unit may determine the number of targets using a rule that derives the number of targets from the number of feature points contained in each cluster related to the partial information (also referred to in this document as a "target number estimation rule").
For example, FIG. 7 may be generated as follows. For one or more sample images, feature points in each image are extracted and clustered, and the feature points contained in each resulting cluster are counted. Since multiple clusters can be detected, the number of contained feature points is determined for each cluster, and several clusters may share the same count (for example, 12 clusters each containing 5 feature points). The figure plots this situation, with the number of feature points per cluster on the horizontal axis and the number of clusters having that count on the vertical axis. For example, the value 5 on the horizontal axis is plotted at 12 on the vertical axis, indicating that there are 12 clusters containing 5 feature points each. In the figure, 5, 9, and 13 on the horizontal axis are peaks, while 7, 11, and 15 are valleys. In such a case, if in the sample images a cluster containing 1 to 7 feature points corresponds on average to 1 target, a cluster containing 8 to 11 feature points to 2 targets, and a cluster containing 12 to 15 feature points to 3 targets, a number of targets may be assigned to each feature-point count. Expressed as a function F, this may be set as:
F(x) = 1 for 1 <= x <= 7
F(x) = 2 for 8 <= x <= 11
F(x) = 3 for 12 <= x <= 15
That is, a rule may be set that estimates the number of targets from the number of feature points contained in each cluster identified by clustering.
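The example rule F above can be sketched directly as a step function; the boundaries 7, 11, and 15 are the example values from the figure, not fixed parts of the method:

```python
def estimate_object_count(num_feature_points):
    """Target number estimation rule from the example:
    1-7 feature points -> 1 target, 8-11 -> 2, 12-15 -> 3."""
    if 1 <= num_feature_points <= 7:
        return 1
    if 8 <= num_feature_points <= 11:
        return 2
    if 12 <= num_feature_points <= 15:
        return 3
    raise ValueError("no rule defined for this feature-point count")

print(estimate_object_count(5))   # 1
print(estimate_object_count(9))   # 2
print(estimate_object_count(13))  # 3
```

In practice, as described later, the rule may equivalently be held as a lookup table or as the learned model itself.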
In this example, since the peak occurs at 5 feature points, one target is estimated to correspond to about 5 feature points; however, the peak for two targets falls at 9 rather than 10, and the peak for three targets at 13 rather than 15. While the counts could be simple multiples, the number of feature points can decrease depending, for example, on the degree to which the targets overlap.
The target number estimation rule described above may be associated with information on the camera that captures the images, and a rule associated with information on one camera may differ from a rule associated with information on another camera. Setting a target number estimation rule per camera has the advantage of allowing precise rules to be set. Here, the information on the camera may include the position where the camera is installed, the time at which it captures images, the way the camera moves, a specific camera ID, and the like. The installation position may be the facility in which the camera is installed, or a position within that facility, such as a ceiling or a wall. This is because a certain relationship is considered to exist among the camera information, the number of feature points, and the number of targets.
For example, consider a camera installed in the delicatessen section of a department store, extracting feature points with people as the targets. The delicatessen section consists of several counters and the routes along which shoppers move between them, and those routes are limited to certain places. A camera installed in such an environment captures images from a specific direction at a specific tilt, so the appearance of the feature points of the people it captures is also limited; the relationship between the number of targets and the number of feature points in such images is therefore considered to follow a certain regularity.
The target number estimation rule may also be associated with a target. Targets include people, vehicles, and animals, but may be more specific: for example, people belonging to a predetermined group, vehicles of a predetermined size or type, or animals belonging to a predetermined group. Since feature-point extraction can differ depending on the group to which a person, vehicle, or animal belongs, there is an advantage in being able to set target number estimation rules per target whose feature points may be detected differently. People belonging to a predetermined group may include, for example, people in suits, people in casual clothes, and people in work clothes such as those worn in factories or on construction sites; determining a rule for each such type of clothing has the advantage of further improving the accuracy of target number estimation. Vehicles of a predetermined size or type may include ordinary cars, motorcycles, taxis, emergency vehicles, trucks, and buses; these differ in size, and feature-point detection can also differ, for instance because taxis carry a distinctive sign on the roof.
Setting a rule per vehicle size or type therefore has the advantage of further improving the accuracy of target number estimation. Likewise, since the form of an animal can differ by species, for example between bipedal and quadrupedal animals, setting a rule according to the specific form of the animal has the advantage of further improving the accuracy of target number estimation.
The target number estimation rule may also be associated with the clustering method. Since the clusters obtained from the vectors based on the feature points can differ depending on the clustering method, there is an advantage in being able to set a target number estimation rule per clustering method.
The target number estimation rule may also be associated with a predetermined group. The predetermined group may be defined by the camera information included in the group, the targets included in the group, the clustering methods included in the group, and/or a combination of these. Grouping has the advantage of reducing the burden of defining a separate rule for every individual difference.
Such a target number estimation rule may be determined artificially, by a person viewing actual footage, deciding the relationship between the number of feature points and the number of targets, and entering the decided values into the system, or it may be determined mechanically. The mechanical approach has the advantage of reducing the human burden.
<Input means>
When the target number estimation rule is determined artificially, the target number estimation rule unit may be configured to accept input of, and to acquire, information related to the rule. The information related to the target number estimation rule is not limited in form as long as it is sufficient to determine the rule; for example, it may be information that directly or indirectly associates numbers of feature points with numbers of targets. FIG. 8 shows an example of a screen for acquiring such information.
In the figure, on the histogram of the number of feature points per cluster against the number of clusters having each count, the system may accept a target number estimation rule through a GUI, for example by dragging a bar such as "boundary between 1 and 2 targets" (01) left or right to mark the boundaries between target counts. This method has the advantage that the user can enter the rule while seeing the relationship between the clusters and the feature-point counts on screen. In this case, the target number estimation rule unit may acquire the information on the target-count boundaries together with the feature-point counts. Although a histogram is shown here, other graphs such as bar graphs, line graphs, pie charts, band graphs, or scatter plots may be used, and the input means may likewise take various forms. In conjunction with or instead of the above, numerical values may be entered directly, for example in a field such as "relationship between number of targets and number of feature points" (02); in that case the target number estimation unit may acquire that information, with the advantage that specific numerical values can be entered.
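As a hedged sketch of how boundary values entered through such a GUI might be turned into a rule, the following assumes the slider positions yield a sorted list of upper boundaries (7, 11, 15 in the running example); `rule_from_boundaries` is a hypothetical helper name:

```python
import bisect

def rule_from_boundaries(boundaries):
    """Build an estimation rule from GUI-entered boundaries.

    `boundaries` holds the largest feature-point count for 1, 2, 3, ...
    targets (e.g. [7, 11, 15]). Counts above the last boundary simply
    extrapolate to the next target count."""
    def estimate(num_feature_points):
        # Index of the first boundary >= the count gives (targets - 1).
        return bisect.bisect_left(boundaries, num_feature_points) + 1
    return estimate

estimate = rule_from_boundaries([7, 11, 15])
print(estimate(5))   # 1
print(estimate(9))   # 2
print(estimate(13))  # 3
```

The same structure would serve whether the boundaries come from the slider GUI or from direct numerical entry.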
As a method of mechanically determining the target number estimation rule, machine learning may be used, employing various artificial intelligence techniques, such as neural networks, genetic programming, functional logic programming, support vector machines, clustering, regression, classification, Bayesian networks, reinforcement learning, representation learning, decision trees, or the k-means method.
Machine learning using a neural network may employ deep learning, a technique that learns the relationship between inputs and outputs across multiple layers and thereby produces outputs even for unknown inputs. Both supervised and unsupervised approaches to the training data exist, and either may be applied.
Using such machine learning techniques, the system may learn the relationship by taking the number of feature points and the number of targets as inputs, or by taking images containing the feature points and the number of targets as inputs.
Although this partly overlaps with machine learning, the target number estimation rule may also be identified by statistical methods using the numbers of feature points and the numbers of targets. If the rule is determined before the captured video is analyzed, it does not need to be determined during that analysis; thus even if determining the rule requires machine learning with substantial computation time and cost, this does not affect the amount or time of computation needed to analyze the video. Therefore, even when deriving the target number estimation rule takes a large amount of computation, the analysis described below can still be executed in real time.
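One simple mechanical possibility, shown only as an illustration and not as the specific statistical method contemplated here, is to place the rule boundaries at the valleys (local minima) of the feature-points-per-cluster histogram:

```python
def boundaries_from_histogram(hist):
    """Pick candidate rule boundaries at local minima of the histogram.

    `hist[k]` is the number of clusters containing k feature points
    (index 0 unused). In the example of FIG. 7, valleys at 7, 11, and 15
    separate the 1-, 2-, and 3-target regions."""
    valleys = []
    for k in range(1, len(hist) - 1):
        if hist[k] <= hist[k - 1] and hist[k] < hist[k + 1]:
            valleys.append(k)
    return valleys

# A made-up histogram with peaks at 5, 9, 13 and valleys at 7, 11, 15:
hist = [0, 1, 2, 4, 8, 12, 8, 3, 6, 10, 6, 2, 5, 9, 5, 1, 2]
print(boundaries_from_histogram(hist))  # [7, 11, 15]
```

The returned boundaries could then be stored as a table or function, as described below.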
A target number estimation rule determined artificially or mechanically by the above methods may be used in various ways: it may be prepared as a table, prepared as a function, or used as the machine-learned system itself. The relationship between feature-point counts and target counts obtained by machine learning may also be converted into such a table or function by feeding trial feature-point counts into the learned model and recording the resulting target counts.
An example system may comprise:
an extraction unit that extracts one or more first feature points as first partial information related to a first image; and
a target number estimation unit that estimates the number of targets related to the first partial information by applying a target number estimation rule to the one or more first feature points.
The target number estimation unit has a function of estimating, for one cluster, the number of targets from the number of feature points in that cluster using the target number estimation rule described above. Since feature points are generally associated with individual targets, much research has gone into clustering techniques that group feature points by actual target, but doing so accurately has proved extremely difficult. In contrast, when the target number estimation unit uses a target number estimation rule, the relationship between feature-point counts and target counts is known in advance, so whatever clustering technique is used, from low-precision to high-precision, the number of targets associated with one cluster can be reasonably identified from the number of feature points in that cluster. In particular, even when a low-precision clustering technique is used, the rule still allows the number of targets associated with a cluster to be reasonably identified from its feature-point count.
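For instance, the per-cluster estimation described above can be sketched as follows; the clusters and the rule are placeholders standing in for the output of any clustering technique and any target number estimation rule:

```python
def total_objects(clusters, rule):
    """Apply a target number estimation rule to every detected cluster
    and sum the results.

    `clusters` is a list of feature-point lists (one list per cluster);
    `rule` maps a feature-point count to a target count, such as the
    example rule 1-7 -> 1, 8-11 -> 2, 12-15 -> 3."""
    return sum(rule(len(points)) for points in clusters)

rule = lambda n: 1 if n <= 7 else (2 if n <= 11 else 3)
clusters = [list(range(5)), list(range(9)), list(range(13))]
print(total_objects(clusters, rule))  # 1 + 2 + 3 = 6
```

Because the rule is a simple lookup, this step adds almost no computational load on top of the clustering itself.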
Note that a plurality of feature points belonging to one cluster whose target count was estimated as 2 by the rule may, at some point, be split into two or more clusters when clustering is performed along the flow of the video based on the vectors between frames. In that case, the number of targets for each resulting cluster may again be determined from its feature-point count based on the target number estimation rule.
Furthermore, since any clustering technique may be used as described above, the target number estimation unit can be applied even when the processing unit that runs the clustering uses a CPU with a relatively low clock rate, such as that of a smartphone or a lightweight laptop; when the storage unit used for the clustering has a relatively small memory, as in such devices; and/or when the clustering must run concurrently with other image processing or other computationally heavy calculations.
When the target number estimation unit uses a clustering technique with a low computational load, there is the further advantage that the number of targets can be determined in real time.
In particular, when feature-point detection is limited to vectors in motion and, as described above, a clustering technique with a low computational cost is used, the overall amount of computation can be reduced.
When an example system includes such a target number estimation unit, the number of targets can be identified even when a cluster is associated with multiple targets; in particular, even when one of those targets begins a different movement, that movement can be identified.
An example system may comprise:
an extraction unit that extracts partial information from an image captured by an imaging device; and
a target number estimation unit that determines, for a cluster related to the partial information, the number of targets related to that partial information.
Such a target number estimation unit can estimate the number of targets whether or not the example system also has an information generation unit capable of generating information through computationally heavy functions such as the trained model described later.
1.5. Information generation unit 15
The information generation unit has a function of generating information related to partial information. When an example system has an information generation unit, it can generate information related to the partial information, which has the advantage of improving the accuracy of that information. The information generation unit may generate information on the target related to the partial information by various methods; for example, an example information generation unit may be configured to acquire an image related to the partial information and generate the target information by machine learning. The information related to the partial information may be the name, attributes, and the like of the target related to that partial information: for a vehicle, its name, type, size, or attributes; for a person in particular, information characterizing the person from a specific viewpoint, such as gender, age, age group, height, build, or facial expression, as well as information on fashion items.
The machine learning function of an example information generation unit may learn the relationship between an image, or a partial image, and the target information. Once trained, the information generation unit may generate, for an image or partial image, the target information corresponding to that image or partial image.
Various artificial intelligence techniques may be used for the machine learning, such as neural networks, genetic programming, functional logic programming, support vector machines, clustering, regression, classification, Bayesian networks, reinforcement learning, representation learning, decision trees, or the k-means method. An example using a neural network is described below, but the approach is not necessarily limited to neural networks.
Machine learning using a neural network may employ deep learning, which learns the relationship between inputs and outputs across multiple layers and thereby produces outputs even for unknown inputs. Both supervised and unsupervised approaches exist, and either may be applied.
Machine learning with deep learning often requires a large number of computations to learn the relationship between inputs and outputs. Applying only a partial image of the whole image captured by the imaging device to the trained information generation unit therefore has the advantage of reducing the amount of computation. Moreover, a partial image corresponding to a moving part of the captured video overlaps the region actually occupied by the target (for example, a person, a pet, or a vehicle) far more than an image of a motionless part does; applying such moving-part partial images to the trained information generation unit therefore makes it easier to generate the target information with a small amount of computation. In particular, the trained model can then generate the target information not only on GPUs suited to neural networks but also on the CPUs of laptops, smartphones, and similar devices.
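As a minimal sketch of this idea, assuming `model` stands for any trained classifier (the text does not fix a specific architecture, so the callable here is a placeholder), inference can be restricted to the partial image:

```python
def classify_partial_image(image, bbox, model):
    """Apply a trained model only to the partial image inside `bbox`
    instead of the full frame, reducing computation.

    `image` is a 2-D array (list of rows), `bbox` is ((x0, y0), (x1, y1)),
    and `model` is any callable returning target information for a patch;
    a real system would substitute its own trained classifier."""
    (x0, y0), (x1, y1) = bbox
    patch = [row[x0:x1] for row in image[y0:y1]]
    return model(patch)

# Hypothetical usage: a dummy "model" that just reports the patch size.
frame = [[0] * 5 for _ in range(5)]
size_model = lambda patch: (len(patch), len(patch[0]))
print(classify_partial_image(frame, ((1, 1), (4, 3)), size_model))  # (2, 3)
```

Only the cropped region reaches the model, which is what keeps the computation small enough for CPU-class devices.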
 Also, an example system may be a system comprising:
 an extraction unit that extracts partial information from a target image; and
 a generation unit that generates information on the target related to the partial information, based on the extracted image related to the partial information,
wherein, in the image related to the partial information, the amount of image information of a part is smaller than the amount of image information of the corresponding portion of the target image. For example, compared with the original target image, the image related to the partial information may carry less image information at the corresponding portion. Because the amount of image information is reduced relative to the original captured target image, the burden of applying the trained model can be reduced.
 Further, in the image related to the partial information of an example system, the amount of image information of a part of the background region may be smaller than that of the corresponding portion of the target image. For example, compared with the original target image, the image related to the partial information may carry less image information at the corresponding portion within the background region. Because the reduction occurs in the background region, where little of the needed information is expected to reside, the burden of applying the trained model can be reduced without significantly degrading the accuracy of the generated information.
 Further, in the image related to the partial information of an example system, the amount of image information of one part may be smaller than that of the remainder of the image. Because that part carries less image information than the rest, the burden of applying the trained model can be reduced.
 Further, in the image related to the partial information of an example system, the amount of image information of a part of the background region may be smaller than that of the portions outside the background region. Because the background region, where little of the needed information is expected to reside, carries less image information than the rest, the burden of applying the trained model can be reduced without significantly degrading the accuracy of the generated information.
 Further, in the image related to the partial information of an example system, the amount of image information per unit may be made uniform over at least a part of the background region. The uniform value may be, for example, a single color, and the single color may be, for example, 0 or 255 in the RGB model. Setting the background region to 0 or 255 in the RGB model allows the trained model, when applied, to process the target's information without being distracted by the varied content of the background and without excessive computational load. In this case, the amount of image information outside the background region, for example the part showing the target, may remain the same as in the original captured target image. Then, because the target's color and texture information is retained in the image related to the partial information, applying the trained model can also yield information such as the target's color and type.
 Here, the background region may be an image computed from the difference between at least two of the images constituting the video, or from the difference between a background image captured by the imaging device and another image captured by that device. The background image may be captured before the difference processing is performed, for example an image captured in advance. In an image from a traffic camera, the background region may correspond to everything other than vehicles and the like; in an image captured inside a commercial facility, to everything other than visitors. Because the background region lies outside the targets, such as vehicles, for which information is to be generated, reducing the amount of information about the background region lets the trained model process the image without being distracted by the background, dramatically reducing the amount of processing. According to the inventors' investigation, the number of images used falls to roughly one hundredth, so the computation becomes feasible even on arithmetic units other than those conventionally required for applying a trained model. Furthermore, when the amount of image information in the background region is small and a neural network is applied as the trained model, the structure of the neural network can be simplified.
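The frame-difference computation of the background/moving regions described above can be sketched as follows. This is a minimal illustration, not the specification's implementation; the array shapes and the threshold value are assumptions chosen for the example.

```python
import numpy as np

def moving_region_mask(frame_a, frame_b, threshold=30):
    """Return a boolean mask that is True where the two frames differ
    by more than `threshold`; everything else is treated as background."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    if diff.ndim == 3:                 # color frames: take max over channels
        diff = diff.max(axis=2)
    return diff > threshold

# Tiny synthetic example: an 8x8 "video" in which a 2x2 object appears.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:4, 2:4] = 255                   # object present only in the new frame

mask = moving_region_mask(prev, curr)
print(int(mask.sum()))                 # number of changed (non-background) pixels
```

Only the masked region would then be passed to the trained model, which is the source of the computation savings the text describes.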
 Further, in the image related to the partial information of an example system, the amount of image information per unit outside the background region may be made uniform. The region outside the background corresponds, for example, to vehicles and the like in a traffic-camera image, or to visitors in an image captured inside a commercial facility. As described above, the uniform value may be a single color, for example 0 or 255 in the RGB model. For example, by setting the background region to 0 and everything else to 255 in the RGB model and applying the trained model to the resulting binarized image, the target's information can be generated mainly from the target's contour, without being distracted by the varied content of the background or of the target's interior, and without excessive computational load.
 The target's shadow may be treated as part of the background region, included with the target outside the background region, or handled separately with its own uniform image-information value as described above. For example, the image related to the partial information of an example system may be ternarized in the RGB model, with the background region set to 0, the target to 255, and the target's shadow to 128, and the trained model may be applied to this ternarized image. In this case as well, the target's information can be generated mainly from the target's contour and shadow without excessive computational load. When the shadow is treated as part of the background region, the extent of the shadow may vary with, for example, the weather, but such variation then has the advantage of not affecting the application of the trained model.
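The ternarized encoding just described (background 0, target 255, shadow 128) can be sketched as follows. It assumes the object and shadow masks have already been obtained by an earlier step such as the difference processing above; the masks here are hypothetical.

```python
import numpy as np

def ternarize(object_mask, shadow_mask):
    """Encode background as 0, shadow as 128, target as 255.
    Where masks overlap, the target value takes precedence."""
    img = np.zeros(object_mask.shape, dtype=np.uint8)
    img[shadow_mask] = 128
    img[object_mask] = 255
    return img

obj = np.zeros((4, 4), dtype=bool); obj[1, 1] = True   # target pixel
sha = np.zeros((4, 4), dtype=bool); sha[2, 1] = True   # shadow pixel
img = ternarize(obj, sha)
print(img[1, 1], img[2, 1], img[0, 0])   # 255 128 0
```

Dropping the `shadow_mask` branch yields the binarized (0/255) variant from the preceding paragraph.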
 Further, a trained model may be selected and used in association with the place where the image is captured. For a traffic camera or a camera inside a facility, for example, the imaging target is clearly vehicles or visitors, respectively. A trained model that has machine-learned the relationship between vehicles and their names may therefore be applied to target images captured by a traffic camera; on the premise that the camera is a traffic camera, information about vehicles can then be generated from the obtained information without a heavy computational load. Similarly, a trained model that has machine-learned the relationship between visitors and their names may be applied to target images captured by a camera inside a facility; on the premise that the camera is installed in the facility, information about visitors can likewise be generated without a heavy computational load. In these cases, reducing the amount of image information in the background region, or outside it, further reduces the computation needed to apply the trained model. The trained model may be stored in association with the installed imaging device, and an example system may be configured to select the trained model to be applied according to the imaging device that captured the target image. The imaging devices may also be divided into groups, such as traffic cameras, in-facility cameras, or the various uses described later, with a trained model provided or connectable for each group, so that the trained model can be selected based on the imaging device associated with the partial information for which information is to be generated.
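The camera-group-based model selection described above amounts to a lookup keyed by the imaging device. The sketch below is a hypothetical illustration: the camera identifiers, group names, and model stand-ins are invented for the example and are not part of the specification.

```python
# Hypothetical registries: each camera belongs to a group, and each
# group is associated with one trained model (strings stand in for
# actual loaded model objects).
MODEL_BY_GROUP = {
    "traffic": "vehicle_classifier",
    "facility": "visitor_classifier",
}
GROUP_BY_CAMERA = {
    "cam-001": "traffic",
    "cam-042": "facility",
}

def select_model(camera_id):
    """Pick the trained model associated with the camera's group."""
    group = GROUP_BY_CAMERA[camera_id]
    return MODEL_BY_GROUP[group]

print(select_model("cam-001"))   # vehicle_classifier
print(select_model("cam-042"))   # visitor_classifier
```

In a real system the group table would reflect where each device is installed, so the premise (traffic camera vs. in-facility camera) is encoded once at installation time rather than re-derived per image.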
 Similarly, an example information generation unit may include a trained model for causing a computer to generate, based on an image or a partial image, the information of the target corresponding to that image or partial image. Such a trained model may be implemented by a program that is part of artificial-intelligence software.
 The machine-learning functions described above may be provided by an example information generation unit, or may reside in an information processing device outside the example system and be queried from the example system. An example information generation unit may also comprise a transmission unit that transmits an image or a partial image to an external information processing device having machine-learning functions, and an acquisition unit that acquires the information of the target corresponding to that image or partial image. The example information generation unit may further store such an image or partial image in association with the acquired information of the corresponding target.
 An example system may be a system comprising:
 an extraction unit that extracts partial information from an image captured by an imaging device; and
 an information generation unit that generates, for the image related to the partial information, information on the target related to the partial information.
1.6. Boundary Unit 16
 The boundary unit has a function of managing boundaries. A boundary may be anything that divides off at least part of a given area. What divides the area may be a line such as a line segment or an arc, or a shape such as a circle or a polygon. A boundary may enclose a closed region, or it may be open, such as a partial line.
 The boundary unit may set a boundary with respect to an image. It may associate one or more boundaries with a single image and store them. The boundary unit may store a boundary as a function, or as a graph consisting of vertices and paths.
 FIG. 9 shows examples of boundaries. A boundary may be a circle (01), a polygon such as a quadrangle (02), a boundary forming part of a region closed off together with the edge of the image or of the screen displaying it (03), a simple line segment that does not enclose a region (04), an arc (05), or some other irregular line. Regarding a boundary as the vertices and paths of a graph, it may or may not be a closed path. A boundary may be stored in the storage unit as a graph, in static memory such as a matrix, list, or structure, or in dynamic memory; the choice may depend on the execution method, such as dynamic or static compilation. The former has the advantage that performance can be improved using information available at run time; the latter, that performance can be improved within the range of known information. The graph may be directed or undirected.
 Further, an example statistical processing unit may associate one direction with one boundary. Since a boundary serves to detect intersection with the position of partial information, the direction associated with the boundary may be used as information for determining from which side of the boundary the partial information moved to the other side.
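Determining which side of a boundary a position lies on, as needed to judge the direction of a crossing, can be sketched with a signed cross-product test. This is one standard way to realize the side test and is not taken from the specification; the coordinates are illustrative.

```python
def side_of_boundary(p, a, b):
    """Sign of the cross product (b-a) x (p-a): +1 if p is on the left
    of the directed boundary a->b, -1 on the right, 0 on the line."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (cross > 0) - (cross < 0)

a, b = (0, 0), (10, 0)                    # a horizontal boundary segment
print(side_of_boundary((5, 3), a, b))     # 1  (one side)
print(side_of_boundary((5, -3), a, b))    # -1 (the other side)
```

Because the boundary is directed (a to b), the sign directly distinguishes the two sides, matching the idea of associating one direction with one boundary.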
 Even when a boundary is not a closed path, it may still be able to detect the movement of partial information. For example, if the image shows the sales floor of a store, people cannot pass through places where shelves or goods are installed, so if a boundary is set in the remaining places, the movement of partial information can be detected by the intersection of its position with that boundary. However, an open boundary is not limited to excluding such impassable paths in the image; it may be set at any movement location of partial information that the user wants to detect.
 The boundary unit may also provide a function that allows the user to input boundaries, which gives the user the advantage of being able to enter them. The boundary unit may have a function of displaying a screen that supports boundary input, which helps the user understand the information involved in entering a boundary. The boundary unit may further have a function of acquiring the boundary information entered by the user, so that it can make use of that information.
 FIG. 10 is an example of a screen, provided by the boundary unit, on which the user can input boundaries. A boundary itself may be set graphically using a pointing device such as a mouse or pointer. It may be possible to select multiple boundaries to define a region (1003), and to deselect boundaries (1007). Boundaries on the screen may be deletable individually (1008) or all at once (1009). The name of the image (1001) and the name of the region (1002) can be displayed. A set boundary can be saved (1005), and boundaries can be set one after another so that they can be defined for various screens (1006).
 Further, an example system may be a system comprising a specifying unit that specifies the positions of targets using the number of targets estimated for one cluster in an image.
 Further, an example system may be a system comprising a specifying unit that specifies the positions of targets using the number of targets estimated for one cluster in an image, and a statistical processing unit.
1.7. Statistical Processing Unit 17
 The statistical processing unit has a function of generating statistical information. Statistical information is information generated by using partial information directly or indirectly. The statistical information itself, or the information needed to generate it, may be stored in a storage device (hereinafter sometimes called the "statistical information database"). Here, the term "database" means no more than a collection of data; it may or may not provide large-scale data storage, rapid data access, and the like. Such a statistical information database may be provided within an example system, or may reside in an information processing device outside the example system and be queried from it. An example system may have a communication unit that queries the statistical information database and an acquisition unit that acquires the information obtained from that database.
 Statistical information may be information on the positions related to partial information, or information related to boundaries. Such statistical information may be generated when the relationship based on the partial information indicates a quasi-identical target.
 The information on the position related to partial information may be the movement direction of that position, or the movement of that position.
 For example, the movement direction of a position related to partial information may be generated. The movement direction may be specified by a numerical value such as 0 to 360 degrees or 0 to 2π, or by a value associated with a predetermined range such as north/south/east/west or up/down/left/right, and the widths of such predetermined ranges may vary. When specified by a predetermined range, the displayed information has the advantage of being easier for the viewer to understand. The movement direction of partial information may be generated from a vector of positions in the images related to the partial information, or using vector information of feature points related to the partial information. The movement direction may also be generated using multiple pieces of partial information, in which case, for example, the average of the vectors based on their positions may be used.
 An example system may have a statistical processing unit that generates a movement direction from a position related to first partial information and a position related to second partial information.
 An example system may comprise, for a first image and a second image constituting a video:
 a partial-information extraction unit that extracts first partial information from the first image and second partial information from the second image;
 a position specifying unit that specifies a position related to the first partial information and a position related to the second partial information; and
 a statistical processing unit that generates a movement direction from the position related to the first partial information and the position related to the second partial information.
When an example system has this configuration, movement-direction information can be generated simply. In such a case, the system can generate the movement direction of the partial information without the target-number estimation or boundary functions. On the other hand, when an example system uses the target-number estimation function or the number of pieces of partial information, it can generate, in addition to the movement direction, information on the number of targets and/or pieces of partial information, so that a viewer of the display can understand how many targets are moving. An example system may also use the boundary function to determine the timing of generating and displaying the movement direction; for example, by generating and displaying the movement direction at the position where a boundary is set, the information can be displayed to the viewer at a specific position.
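Generating a movement direction from two positions, and mapping the angle onto named ranges such as the compass directions mentioned above, can be sketched as follows. The four-way bucketing is one illustrative choice of range width, not taken from the specification.

```python
import math

def movement_direction(p1, p2):
    """Angle in degrees [0, 360) of the movement from p1 to p2,
    measured from the +x axis (0 = east, 90 = north)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0])) % 360

def compass(angle):
    """Map an angle onto one of four named predetermined ranges."""
    names = ["east", "north", "west", "south"]
    return names[int(((angle + 45) % 360) // 90)]

# Positions of the first and second partial information (illustrative).
ang = movement_direction((0, 0), (0, 5))
print(ang, compass(ang))   # 90.0 north
```

Averaging several such per-pair vectors before calling `movement_direction` would realize the multi-piece averaging variant mentioned in the text.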
 FIG. 11 is an example of displaying the movement directions of positions related to partial information: display 1101 shows one movement direction, and display 1102 shows a rightward movement direction.
 The movement of a position related to partial information may be generated, for multiple images captured by the imaging device, by using one piece of partial information per image and basing the movement on the positions of the corresponding pieces of partial information across those images. The movement of a position related to partial information may also be generated using multiple pieces of partial information within one image. As statistical information, an example statistical processing unit may store information that associates partial information with the position related to that partial information.
 An example system may have a statistical processing unit that generates the movement of partial information from a position related to first partial information and a position related to second partial information.
 An example system may comprise, for a first image and a second image constituting a video:
 a partial-information extraction unit that extracts first partial information from the first image and second partial information from the second image;
 a position specifying unit that specifies a position related to the first partial information and a position related to the second partial information; and
 a statistical processing unit that generates movement from the position related to the first partial information and the position related to the second partial information.
When an example system has this configuration, information on the movement of partial information can be generated simply. The displayed movement may cover a predetermined period or a predetermined distance, giving the viewer the advantage of understanding a certain amount of past movement. In such a case, the system can generate the movement of the partial information without the target-number estimation or boundary functions. On the other hand, when an example system uses the target-number estimation function or the number of pieces of partial information, it can generate, in addition to the movement, information on the number of targets and/or pieces of partial information, so that a viewer of the display can understand how many targets are moving. An example system may also use the boundary function to determine the timing of generating and displaying the movement; for example, by generating and displaying the movement at the position where a boundary is set, the information can be displayed to the viewer at a specific position.
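Keeping the movement of one piece of partial information over a predetermined window, as described above, might be sketched with a bounded position history. The window length and coordinates are illustrative assumptions.

```python
from collections import deque

class Track:
    """Keep the last `maxlen` positions of one piece of partial
    information and report its displacement over that window."""
    def __init__(self, maxlen=3):
        self.positions = deque(maxlen=maxlen)   # old entries drop off

    def update(self, pos):
        self.positions.append(pos)

    def displacement(self):
        (x0, y0), (x1, y1) = self.positions[0], self.positions[-1]
        return (x1 - x0, y1 - y0)

t = Track(maxlen=3)
for p in [(0, 0), (1, 1), (2, 1), (4, 2)]:   # positions from successive frames
    t.update(p)
print(list(t.positions), t.displacement())
```

The retained positions give the "past movement" to display; replacing the count-based window with a time- or distance-based cutoff yields the predetermined-period and predetermined-distance variants.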
 FIG. 12 is an example of displaying the movement of positions related to partial information: display 1201 is an example showing a curved movement of partial information, and display 1202 is an example showing a linear movement of partial information.
 An example system may also display both the movement direction and the movement of a position related to partial information. In this case, the viewer can understand both the past movement and the current movement direction.
The information related to a boundary may be any information generated in relation to the boundary; for example, it may be a number related to the positions of partial information that crossed one boundary, or a number based on the number of targets related to partial information that crossed one boundary. When the example system has such a configuration, there is an advantage that information indicating the relationship between the positions of partial information and the boundary can be generated.
The number related to the positions of partial information that crossed one boundary may be, for example, the total number of pieces of partial information that crossed the boundary, the total number of such pieces over a predetermined period, the average number per predetermined period, or a density obtained by dividing the total number by a numerical value related to the boundary. Here, the numerical value related to the boundary may be, for example, an area associated with the boundary or a rent associated with the boundary. For a closed boundary, these may be the area or rent of the enclosed region; for a boundary that is not closed, they may be a virtual area or rent associated with the boundary. When the example system has such a configuration, there is an advantage that statistical information based on the positions of partial information can be generated. In particular, totals and averages allow quantitative information about the partial information to be generated; in the case of density, the area or rent associated with the boundary allows information such as the utilization efficiency of that area, or the value obtained relative to the rent, to be generated.
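The counting described above can be sketched with a standard segment-intersection test. This is a minimal sketch under assumed data shapes (trajectories as point lists, an open boundary as a line segment); it is not the patent's implementation, and the divisor used for the density is purely illustrative.

```python
# Illustrative sketch (not the patent's method): counting how many
# partial-information trajectories cross one boundary, and a density figure
# obtained by dividing the total by a numerical value tied to the boundary.

def segments_intersect(p1, p2, q1, q2):
    """True if open segment p1-p2 strictly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1*d2 < 0) and (d3*d4 < 0)

def crossing_count(trajectories, boundary):
    """trajectories: list of point lists; boundary: (start, end) segment."""
    b1, b2 = boundary
    total = 0
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            if segments_intersect(a, b, b1, b2):
                total += 1
                break   # count each trajectory at most once
    return total

trajs = [[(0, -1), (0, 1)],        # crosses the boundary
         [(2, -1), (2, 1)],        # misses it
         [(-1, -1), (1, 1)]]       # crosses it
boundary = ((-0.5, 0), (1.5, 0))   # an open boundary segment on y = 0
total = crossing_count(trajs, boundary)
density = total / 3.0              # divisor stands in for a boundary-linked area
```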
The number based on the number of targets related to partial information that crossed one boundary may be, for example, the total number of such targets, the total number over a predetermined period, the average number per predetermined period, or a density obtained by dividing the total number by a numerical value related to the boundary. When the example system has such a configuration, there is an advantage that statistical information that also takes the number of targets into account can be generated.
When one boundary is associated with one specific place, the location of one specific product, the location of one specific advertisement, or the like (hereinafter also referred to as a "boundary place, etc."), the information related to the boundary may be the total number of pieces of partial information that crossed the boundary associated with the specific boundary place, etc., the total over a predetermined period, the average per predetermined period, and so on. When the example system generates such information, there is an advantage that statistical information incorporating elements such as the boundary place can be generated. When such boundary places are used, the relationship between a boundary and its boundary place, etc. may be stored in a storage device (hereinafter also referred to as a "boundary place database"). Here, the term database means no more than a collection of data; it may or may not have large-scale data storage functions, rapid data access functions, and the like. Such a boundary place database may be provided within the example system, or may reside in an information processing device external to the example system and be queried from it. The example system may have a communication unit that queries the boundary place database and an acquisition unit that acquires the information returned by it.
The information related to a boundary may include a store as one specific place. In this case, the information related to the boundary may include the number of visits made when a person who is a target of partial information visits one store.
The information related to a boundary may also be one of various rankings, as described later.
The example statistical processing unit may generate statistical information using the partial-information relation generation unit, the position identification unit, the target-number estimation unit, the information generation unit, the boundary unit, and/or the tracking unit described later.
1.8. Tracking unit 18
The example system may have a tracking unit. The tracking unit may have only the function of tracking the movement of partial information within a moving image captured by one imaging device, only the function of tracking the movement of a target from a moving image captured by one imaging device into a moving image captured by another imaging device, or both of these functions. The tracking unit may also track the movement of positions related to partial information.
The tracking unit may have a function of determining whether targets related to partial information are pseudo-identical. It may make this determination using feature points of the clusters related to the partial information, the positional relationship between the imaging ranges of the imaging devices, and/or the time information of the captures. The tracking unit may also use the above-described partial-information relation generation unit to determine pseudo-identity. When the targets related to pieces of partial information are determined to be the same, the tracking unit may track that target.
When the tracking unit uses feature points of a target, it may, for example, use the positional relationship of one or more feature points to determine whether targets are pseudo-identical. This is based on the assumption that the positional relationship of the feature points of one target either does not change, or changes in a fixed way, between its position in an image captured by one imaging device and its position in an image captured by another imaging device. A fixed change can arise from differences in the installation angle and position between the two imaging devices; by preparing a rule for this positional relationship in advance, it can be used as information for determining pseudo-identity. When it is difficult to create such a rule, the positional relationship of the feature points need not be used.
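The no-change case of this assumption can be sketched as a comparison of relative feature-point layouts. The tolerance value and function names below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the idea above: two detections are treated as
# "pseudo-identical" when the relative layout of their feature points is
# (near-)unchanged between cameras. Threshold and names are assumptions.

def relative_layout(points):
    """Offsets of each feature point from the first one."""
    x0, y0 = points[0]
    return [(x - x0, y - y0) for x, y in points]

def pseudo_identical(points_a, points_b, tol=1.0):
    """True when the two layouts match within tol per coordinate."""
    if len(points_a) != len(points_b):
        return False
    for (ax, ay), (bx, by) in zip(relative_layout(points_a),
                                  relative_layout(points_b)):
        if abs(ax - bx) > tol or abs(ay - by) > tol:
            return False
    return True

same = pseudo_identical([(10, 10), (12, 14), (15, 11)],
                        [(50, 20), (52, 24), (55, 21)])  # same shape, shifted
diff = pseudo_identical([(10, 10), (12, 14), (15, 11)],
                        [(50, 20), (58, 24), (55, 30)])  # shape changed
```

The "fixed change" case would add a known per-camera transform before comparison.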
When the tracking unit uses the positional relationship between the imaging ranges of imaging devices, the imaging range of one device and that of another may, for example, be adjacent or partially overlapping. When the two ranges are adjacent and the feature points move continuously, a feature point that was within the range of one device is expected to move into the range of the other; by inferring that a target within the range of one device has moved into the range of the other, pseudo-identity may be determined. When the two ranges partially overlap, feature points in the overlapping region exist in both ranges simultaneously; using this information, pseudo-identity of the targets may be determined, for example, by judging the identity of the feature points.
When the tracking unit uses the time information of the captures, in the adjacent-range case described above, the inference that a feature point in one device's range has moved into the other device's range can be made using the time information; the identity of the feature points can then be judged to determine whether the targets are pseudo-identical.
As described above, the tracking unit of the example system may detect the movement of a target across moving images captured by a plurality of imaging devices, thereby identifying the target's movement.
Based on such movement information, the tracking unit may store information associating the position of the target with time. FIG. 13 is an example of such stored information.
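A minimal sketch of such a store, with the time-window look-up described next, might look as follows. The field layout and identifiers are illustrative assumptions, not the contents of FIG. 13.

```python
# Minimal sketch of a tracking record associating a target with its position
# at each time, plus a look-up over a specific period (names are assumed).
from collections import defaultdict

class TrackStore:
    def __init__(self):
        self._tracks = defaultdict(list)   # target_id -> [(time, x, y), ...]

    def record(self, target_id, time, x, y):
        self._tracks[target_id].append((time, x, y))

    def positions_between(self, target_id, t_start, t_end):
        """Positions of one target within [t_start, t_end]."""
        return [(t, x, y) for (t, x, y) in self._tracks[target_id]
                if t_start <= t <= t_end]

store = TrackStore()
store.record("person-3", 0.0, 1.0, 1.0)
store.record("person-3", 1.0, 2.0, 1.5)
store.record("person-3", 5.0, 6.0, 3.0)
window = store.positions_between("person-3", 0.5, 2.0)
```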
Using such information, the tracking unit may have a function of identifying the position of a target during a specific period, from one specific time to another, and a function of acquiring that position.
The tracking unit may also use the boundary place database to identify a specific position for a specific time period and one target related to partial information. This has the advantage of, for example, identifying where a certain target was during a certain time period. The tracking unit may use the boundary place database to identify one or more pieces of partial information, or the targets related to them, for a specific time period and a specific place; this has the advantage of, for example, identifying what targets were present at a specific place during a specific time period. The tracking unit may also use the boundary place database to identify a specific time period for a specific place and one target related to partial information; this has the advantage of, for example, identifying during which time period a specific person was at a specific place.
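The three look-ups described above (where, who, when) can be sketched against a toy visit table. The record shape and names are illustrative assumptions, not the patent's database schema.

```python
# Illustrative sketch of the three boundary-place-database look-ups: where a
# target was at a time, who was at a place at a time, and when a target was
# at a place. All data and names are assumed.

visits = [  # (target_id, place, t_start, t_end)
    ("person-3", "store-A", 10, 20),
    ("person-4", "store-A", 15, 25),
    ("person-3", "store-B", 30, 40),
]

def place_of(target, t):
    return [p for (tg, p, s, e) in visits if tg == target and s <= t <= e]

def targets_at(place, t):
    return [tg for (tg, p, s, e) in visits if p == place and s <= t <= e]

def times_at(target, place):
    return [(s, e) for (tg, p, s, e) in visits if tg == target and p == place]

where = place_of("person-3", 12)        # where was person-3 at t=12?
who = targets_at("store-A", 18)         # who was in store-A at t=18?
when = times_at("person-3", "store-B")  # when was person-3 in store-B?
```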
FIG. 20 shows the processing flow of an example system using each of the above functions. In steps 2002 to 2008, the order in which each piece of information is generated may be changed as appropriate, as long as the information needed for that generation is available. Although the figure shows steps 2002 to 2008 after the generation of the first partial information and the second partial information, this is only an example; the flow may be changed so that each function is processed once its prerequisite information is available, for example by identifying the position after generating the first partial information and only then generating the second partial information, or by processing the boundary management (2006) before the generation of the first partial information (2001).
2. Example embodiments
2.1. <One system example>
An example system may have the configuration illustrated in FIG. 14. In this figure, the example system may be composed of an information processing device 1410. The information processing device 1401 may include an arithmetic device 1411, a storage device 1412, a bus 1415, and a communication device 1416. The arithmetic device 1411 may have a computing function; it may be a CPU, a GPU, or the like, and may be capable of executing various instructions. It may also have a storage function such as a cache. The storage device 1412 may be any device with a storage function; it may be a primary to tertiary storage device, and may be volatile or non-volatile memory. The storage device may store programs executable as software as described in this application, as well as information to be processed, and information after processing, by instructions of such programs. The storage may be temporary or permanent. The bus 1415 may have a function of transmitting information within the information processing device, and the communication device 1416 may have a function of transmitting information via a network.
Although the figure shows the information processing device 1401 with one arithmetic device and one storage device, a plurality of arithmetic devices and a plurality of storage devices may be provided, and each may be of various types. The information processing device 1401 itself may be composed of one information processing device or of a plurality of information processing devices, and may be on a cloud, a server, or the like.
The example system may include an AI device 1418. Although the figure shows the AI device 1418 as connectable to the information processing device 1410 via a network 1417, it may instead be inside the information processing device 1410 or directly connected to it. In that case, since the network 1417 need not be traversed, there is an advantage that information can be exchanged quickly.
In addition to the information processing device 1401, the example system may include one or more imaging devices 1401a to 1401c. The imaging devices are described later.
The example system may also include a display device 1414 and/or an input device 1413. Although the figure shows the display device as part of the information processing device 1410, it may be an information processing device independent of the information processing device 1410. There may be one or more information processing devices related to such a display device, and they may be connected via the network 1417. The information processing device related to the display device may be a terminal, and may be used by an administrator who manages the example system or by a user who uses it. An input device 1413 may be provided corresponding to such a display device. The input device may be a touch panel integrated with the display device or a separate device, such as a keyboard, pointer, or mouse. The example system may or may not include these input devices and/or display devices. Where the display of information is described in this application, the example system may include a transmission unit capable of transmitting that information to the information processing device related to the display device so that the display can be performed. Likewise, where the input of information is described, the example system may include an acquisition unit capable of acquiring that information from the information processing device related to the input so that the input can be performed via the input device.
Next, an application example of the example system is described with reference to FIG. 15. The figure assumes, for example, one floor of a department store, but it goes without saying that the same reasoning applies, inside or outside a facility, to any place where one or more imaging devices are installed: shopping malls, stores, offices, financial institutions, medical facilities, accommodation facilities, government offices, educational facilities, cultural facilities, sports facilities, factories, aircraft facilities, vehicle facilities, their annex facilities, and so on.
Examples outside a facility include the exterior of the facilities above, farms, mines, mountains and fields, and artificial structures built there; the imaging devices described later may be installed in such places.
The imaging devices described later may be installed at locations inside or outside the facilities described above, but need not be installed at a fixed location. For example, an imaging device mounted on a drone in flight may capture images from a viewpoint different from that of the facility.
The figure schematically shows imaging devices (1A, 1B, 1C, 1D), their imaging ranges (2A, 2B, 2C, 2D), and people (3 to 5).
The imaging devices may be of various types; for example, digital or analog cameras of any kind. The purpose of imaging may also be crime prevention, monitoring of visitors or employees, marketing, record keeping, and so on. The camera may be a stereo camera, a panoramic camera, a construction-site camera, an underwater camera, or the like. The lens may be an ordinary lens, or a wide-angle lens such as a fisheye lens or an ultra-wide-angle lens. A wide-angle lens has the advantage that one camera can capture a wider range than an ordinary lens; in particular, when a wide area is to be imaged, using wide-angle lenses can reduce the number of imaging devices needed compared with ordinary lenses.
Each imaging device may capture moving images or a large number of still images. The example partial-information extraction unit may extract partial information from images based on these. The targets of such partial information may be, for example, people, pets, and vehicles, and the numbers in the boundary-related information may correspondingly be the number of people, pets, or vehicles.
The example statistical processing unit may use the partial information to identify its moving direction. Such a direction has the advantage of allowing the movement of the target represented by the partial information to be organized.
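One way to derive such a direction from two successive positions can be sketched as follows. The eight-way compass labeling is an assumption for illustration, not the patent's method.

```python
# A minimal sketch (assumed, not from the patent) of deriving a movement
# direction label from two successive positions of partial information.
import math

def direction_of(prev, curr):
    """Eight-way compass label for the displacement from prev to curr."""
    angle = math.degrees(math.atan2(curr[1] - prev[1], curr[0] - prev[0]))
    labels = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    # Shift by half a sector (22.5 deg) so each label spans 45 deg around it.
    return labels[int(((angle + 22.5) % 360) // 45)]

d1 = direction_of((0, 0), (5, 0))   # moving along +x
d2 = direction_of((0, 0), (0, 5))   # moving along +y
```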
When the example system targets people and the partial information is a cluster of feature points, the number of targets may be identified for each cluster, as in the figure: one person for person 3, two for person 4, three for person 5, and so on. The example statistical processing unit may store such information; FIG. 16 is an example of what the statistical processing unit stores. In this way, the statistical processing unit may generate information indicating the direction in which targets such as people move, which has the advantage of allowing the targets' moving directions to be organized.
FIG. 17 is an example displaying the moving directions generated by the example statistical processing unit. In the figure, the direction in which each person (3 to 5) moves is indicated by an arrow. Displaying the moving directions of the targets has the advantage that the viewer can understand them. The display mode is not limited to arrows and may be any mode the viewer can understand.
The example statistical processing unit may also display the boundary-related information as various images. For example, the statistical processing unit may use, as boundary-related information, the total number of pieces of partial information that crossed one boundary over a predetermined period, as in the example above, and display the magnitude of that total using an image. Such an image may be anything a person can grasp intuitively: for example, the transparency of a pattern or gradation, the amount of lines, a color, or a mark indicating the number of people in the relevant region. The representation may be chosen to match human intuition: for pattern or gradation transparency, lower transparency indicating a larger number and higher transparency a smaller number; a greater amount of lines indicating a larger number; a color closer to red indicating a larger number and closer to blue a smaller number; and so on. The marks may be predetermined, for example marks for large, medium, and small numbers, displayed in the corresponding region. The pattern may also be associated with time periods and displayed as an animation showing the magnitude of the number in each region in each time period; in this case, the viewer has the advantage of more easily understanding how the numbers change over time.
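The transparency mapping suggested above (larger count, lower transparency) can be sketched as a simple normalization. The 0-255 alpha scale and the linear mapping are assumptions for illustration, not values from the patent.

```python
# Hedged sketch of a count-to-image mapping: larger crossing counts yield a
# less transparent (denser) overlay. Scale and normalization are assumptions.

def count_to_alpha(count, max_count):
    """Map a crossing count to an overlay alpha in [0, 255]."""
    if max_count <= 0:
        return 0
    ratio = min(count / max_count, 1.0)   # clamp counts above the maximum
    return int(round(ratio * 255))

alphas = [count_to_alpha(c, 40) for c in (0, 10, 40, 80)]
```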
The boundary-related information as statistical information may also be generated in real time. In that case, the current number of pieces of partial information may be displayed without computing a total over a predetermined period, or a total over a relatively short period suited to the viewing situation, such as 1, 3, 5, or 30 minutes or 1 hour, may be displayed. When the magnitude of the numbers is displayed in real time, the viewer can understand the degree of congestion. When the example statistical processing unit displays such an image on a store terminal for a store manager, congestion can be handled appropriately, for example by assigning staff to congested areas or guiding customers. When the statistical processing unit targets store visitors as viewers, for example near the store entrance, it can provide visitors with information on congested areas. If the real-time display is on the web, users have the advantage of being able to check congested areas on the web from a mobile terminal.
The example statistical processing unit may also generate, as statistical information, the relationship between a position related to partial information, the time spent at that position, and a product such as a product shelf in a store. This makes it possible to collect information on how long the user related to the partial information looked at the product. The statistical processing unit may further use that dwell time to generate, as statistical information, a degree of interest in the product, which has the advantage of producing the user's degree of interest in the product.
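The dwell-time-to-interest relationship can be sketched as follows. The thresholds and labels are illustrative assumptions; the patent does not specify how interest is scored.

```python
# Illustrative sketch: relating dwell time near a product shelf to a coarse
# interest level. Thresholds and labels are assumptions, not patent values.

def dwell_seconds(timestamps):
    """Total observed dwell time from first to last detection, in seconds."""
    return max(timestamps) - min(timestamps) if timestamps else 0

def interest_level(dwell, short=5, long=30):
    """Coarse interest label from dwell time (thresholds are illustrative)."""
    if dwell >= long:
        return "high"
    if dwell >= short:
        return "medium"
    return "low"

dwell = dwell_seconds([100, 104, 112, 140])   # detection times near one shelf
level = interest_level(dwell)
```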
The example statistical processing unit may also generate various rankings as boundary-related information. Such a ranking may be a ranking of stores in the facility based on the number of visitors, where the visitors are people who are targets of partial information; for example, store A first, store B second, and so on. In this case, there is an advantage that information on the number of visitors to each store can be generated.
The example statistical processing unit may also generate, as a ranking, a ranking of the stores most or least often visited before or after a specific store. In this case, there is an advantage that information on the relationships between stores can be generated from which stores users visit before or after a specific store.
 An example statistical processing unit may also generate a ranking of the stores that users visited i-th (i >= 1) within the facility. This yields information on which stores are visited at a particular point in the sequence of visits. In particular, when an example statistical processing unit generates a ranking of the first store visited, it produces information on which stores are visited first in the facility; likewise, a ranking of the last store visited produces information on which stores are visited last.
 An example statistical processing unit may also generate a ranking of stores for the case where users visited exactly i stores (i >= 1, a positive integer). This has the advantage of producing rankings restricted to visits covering a limited number of stores. In particular, when an example statistical processing unit generates a ranking of stores for users who visited only one store, it captures destination stores that were clearly the purpose of the visit.
 An example statistical processing unit may also generate rankings of the places visited before and after a specific store. Such a ranking may cover the stores or places visited before the specific store, or those visited after it. This makes it possible to understand where users go immediately before and after a given store.
 An example statistical processing unit may also rank the stores or places visited a predetermined number of visits before, or a predetermined number of visits after, a specific store: for example, the store visited immediately after the specific store, the store visited two stops after it, the store visited immediately before it, or the store visited two stops before it. This reveals the stores and places visited a fixed number of steps before or after a specific store and, in some cases, the routes from those places to the specific store. When such routes are understood, marketing activities such as placing advertisements or distributing flyers for the specific store become possible.
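The before/after-visit rankings above could be derived from per-visitor ordered visit sequences, for example by counting the stores visited immediately before a target store. The function name and the data layout here are hypothetical illustrations, not part of the disclosure.

```python
from collections import Counter

def rank_predecessors(visit_sequences, target):
    """Rank stores visited immediately before `target`, most frequent first."""
    counts = Counter()
    for seq in visit_sequences:
        # Walk consecutive pairs (prev -> cur) in each visitor's sequence.
        for prev, cur in zip(seq, seq[1:]):
            if cur == target:
                counts[prev] += 1
    return counts.most_common()

sequences = [
    ["A", "B", "C"],   # visitor 1
    ["B", "C"],        # visitor 2
    ["A", "C", "B"],   # visitor 3
]
print(rank_predecessors(sequences, "C"))  # [('B', 2), ('A', 1)]
```

Ranking "two stops before" or "immediately after" amounts to changing which pair offset is counted (e.g. `zip(seq, seq[2:])` or swapping `prev` and `cur`).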
 An example system may comprise:
 a partial-information extraction unit that extracts partial information from images captured by imaging devices installed inside and outside a facility; and
 an information generation unit that uses the partial information to generate information about the target related to that partial information.
 Because such a system generates information from partial information covering only part of an image, it has the advantage of generating information with a small amount of computation. In particular, when the partial information relates to a moving part of a video containing the image, it is likely to relate to a moving target, so information about the target can be generated efficiently using a trained information generation function. For example, when a partial image as partial information mainly frames a person, the person occupies a large fraction of that image, which helps in identifying that the target is a person and in generating information that describes the person from particular viewpoints, such as gender, age, age group, height, build, and facial expression, as well as information about fashion items.
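The moving-part extraction described above can be sketched with a simple frame difference between two grayscale frames. This is a minimal sketch: the threshold value, the function name, and the single-region assumption are illustrative choices, not the disclosed implementation.

```python
import numpy as np

def extract_moving_region(prev_frame, cur_frame, threshold=30):
    """Return a binary change mask and the bounding box of pixels that
    differ between two grayscale frames (frame-difference 'partial information')."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))  # x0, y0, x1, y1
    return mask, bbox

prev = np.zeros((8, 8), dtype=np.uint8)
cur = prev.copy()
cur[2:5, 3:6] = 200          # a "moving object" appears in the current frame
mask, bbox = extract_moving_region(prev, cur)
print(bbox)  # (3, 2, 5, 4)
```

The cropped bounding-box region, rather than the whole image, would then be passed to the information generation unit, which is what keeps the computation small.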
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information; and
 a statistical processing unit that generates statistical information using the partial information.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information;
 a partial-information relationship generation unit that generates relationships using the plurality of pieces of partial information; and
 a statistical processing unit that generates statistical information using the partial information.
 Because such a system generates statistical information on the basis of relationships among pieces of partial information, it has the advantage of generating statistics while also deriving, in a simple way, the movement of the targets related to the partial information.
 Other applications of the various functions and example systems described above are explained below. To avoid repetition, only the modifications specific to each example are described, but it goes without saying that the techniques described above may also be applied to the following application examples.
2.2. Example of worker flow lines
 An example statistical processing unit may use the movement of positions related to partial information, based on images acquired from imaging devices installed in a factory, to generate information on the movement of workers within the factory. For example, it may generate the worker movements indicated by the positions of the partial information, which yields information showing how the workers move. Such movements may also be displayed, which lets a viewer check, for example, whether the workers' movements contain wasted motion. An example statistical processing unit may also generate and display the worker movements as an average over a predetermined period, yielding average movement information for that period. In addition, an example system may include a tracking unit that tracks the movement of partial information across multiple imaging devices.
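Averaging worker movement over a period, as described above, could be done by averaging time-aligned position samples across days. This sketch assumes positions are sampled at the same fixed times each day so they can be averaged index-wise; that alignment, and the function name, are assumptions for illustration.

```python
import numpy as np

def average_daily_path(daily_tracks):
    """Average a worker's sampled (x, y) positions across days.

    Each track is a list of (x, y) positions sampled at the same fixed
    times of day, so positions can be averaged index by index."""
    arr = np.array(daily_tracks, dtype=float)  # shape: (days, samples, 2)
    return arr.mean(axis=0)

tracks = [
    [(0, 0), (2, 2), (4, 4)],   # day 1
    [(0, 2), (2, 4), (4, 6)],   # day 2
]
print(average_daily_path(tracks).tolist())  # [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
```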
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about workers from images captured by imaging devices installed in a factory; and
 a statistical processing unit that generates statistical information using the partial information.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about workers from images captured by imaging devices installed in a factory;
 a partial-information relationship generation unit that generates relationships using the plurality of pieces of partial information; and
 a statistical processing unit that generates statistical information using the partial information.
2.3. Example of traffic vehicles
 An example system may use partial information based on images captured by imaging devices such as traffic cameras installed at places where vehicles may be present, such as intersections, highway toll gates, parking lots, and roadsides.
 A statistical processing unit of an example system may generate, based on the partial information, statistical information about the vehicles to which the partial information relates. Traffic cameras in particular operate in outdoor environments and generally under less favorable conditions than indoor cameras, so applying a trained model during image analysis can increase the computational load. Therefore, when the trained model is not applied, or is applied less often, an example system has the advantage of being able to generate statistical information about vehicles and the like by using the partial-information relationship generation unit described above.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about vehicles from images captured by imaging devices installed at places where vehicles may be present; and
 a statistical processing unit that generates statistical information using the partial information.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about vehicles from images captured by imaging devices installed at places where vehicles may be present;
 a partial-information relationship generation unit that generates relationships using the plurality of pieces of partial information; and
 a statistical processing unit that generates statistical information using the partial information.
2.4. Example of construction sites and mines
 An example system may use partial information based on images captured by imaging devices installed at construction sites or mines, and the partial information may relate to vehicles used at such sites. Imaging devices installed at construction sites and mines generally operate under less favorable conditions than those installed in facilities, so applying a trained model during image analysis can increase the computational load. Therefore, when the trained model is not applied, or is applied less often, an example system has the advantage of being able to generate statistical information about vehicles and the like by using the partial-information relationship generation unit and/or the target-count estimation unit described above. Further, when an example system generates vehicle movements, it can produce information useful for moving the vehicles efficiently.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about vehicles from images captured by an imaging device installed at a position whose field of view includes part or all of a vehicle movement area; and
 a statistical processing unit that generates statistical information using the partial information.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about vehicles from images captured by an imaging device installed at a position whose field of view includes part or all of a vehicle movement area;
 a partial-information relationship generation unit that generates relationships using the plurality of pieces of partial information; and
 a statistical processing unit that generates statistical information using the partial information.
 Because such a system generates statistical information on the basis of relationships among pieces of partial information, it has the advantage of generating statistics while also deriving, in a simple way, the movement of the targets related to the partial information.
2.5. Example of animal targets
 An example system may image animals with an imaging device. The animals may be, for example, cattle, pigs, sheep, birds, horses, goats, or reindeer, and may be livestock. In particular, when such animals live over a wide area, such as outdoors, the imaging devices are installed in the open. In that case a wide area is covered by a small number of imaging devices, which generally operate under less favorable conditions than devices inside a facility, so applying a trained model during image analysis can increase the computational load. Therefore, when the trained model is not applied, or is applied less often, an example system has the advantage of being able to generate statistical information about the animals while reducing the computational burden by using the partial-information relationship generation unit and/or the target-count estimation unit described above.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about animals from images captured by imaging devices installed outdoors; and
 a statistical processing unit that generates statistical information using the partial information.
 An example system may comprise:
 a partial-information extraction unit that extracts a plurality of pieces of partial information about animals from images captured by imaging devices installed outdoors;
 a partial-information relationship generation unit that generates relationships using the plurality of pieces of partial information; and
 a statistical processing unit that generates statistical information using the partial information.
 Because such a system generates statistical information on the basis of relationships among pieces of partial information, it has the advantage of generating statistics while also deriving, in a simple way, the movement of the targets related to the partial information.
2.7. Example of drone use
 The imaging device may be mounted on a drone. A drone-mounted imaging device has the advantage of being able to capture images from positions different from those of devices installed in a facility. For example, even where there is no place to install an imaging device, a drone-mounted device allows the imaging position to be chosen more freely by changing where the drone flies.
 Further, when a drone carrying an imaging device flies higher than the facility, it can capture images from a higher altitude than facility-mounted devices, which has the advantage of enabling a wider field of view.
 Further, when a drone carrying an imaging device is controlled to hover at a roughly fixed position, the images it captures come from roughly the same viewpoint, so the partial-information extraction unit and partial-information relationship generation unit have the advantage of being able to appropriately generate information about the partial information. Even when the drone moves at low speed, the same processing as when hovering can be applied by using images from which the expected movement has been subtracted. The movement may be predicted using preset movement information, actual movement information acquired by a gyroscope, GPS, or the like, or movement information estimated from correspondences between image feature points.
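As a rough illustration of the low-speed case above, a known per-frame image shift (obtained, for instance, from gyro/GPS data) can be undone before frame differencing. The wrap-around shift and the specific numbers below are simplifying assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def compensate_shift(frame, dx, dy):
    """Undo a known image shift of (dx, dy) pixels caused by drone motion,
    so frame differencing behaves as in the hovering case. Edges wrap here
    for simplicity; a real system would crop or mask them."""
    return np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)

# A static scene point seen at (row=1, col=1), then apparently at
# (row=2, col=3) because the camera moved by an assumed known (dx=2, dy=1).
prev = np.zeros((6, 6), dtype=np.uint8)
prev[1, 1] = 255
cur = np.zeros_like(prev)
cur[2, 3] = 255

compensated = compensate_shift(cur, dx=2, dy=1)
residual = int(np.abs(compensated.astype(int) - prev.astype(int)).sum())
print(residual)  # 0 -> the static background cancels after compensation
```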
 The targets imaged by a drone-mounted imaging device may be any of the targets described in Sections 2.1 to 2.6 above, or other targets.
 An example system may comprise:
 a partial-information extraction unit that extracts one or more pieces of partial information from images captured by an imaging device mounted on a drone; and
 a statistical processing unit that generates statistical information using the one or more pieces of partial information.
 An example system may comprise:
 a partial-information extraction unit that extracts one or more pieces of partial information from images captured by an imaging device mounted on a drone;
 a partial-information relationship generation unit that generates relationships using the one or more pieces of partial information; and
 a statistical processing unit that generates statistical information using the one or more pieces of partial information.
 Because such a system generates statistical information on the basis of relationships among pieces of partial information, it has the advantage of generating statistics while also deriving, in a simple way, the movement of the targets related to the partial information.
2.8. Example of a notification system
 The statistical processing unit of an example system may include a notification unit that notifies a terminal of the system when a target related to partial information enters a predetermined area. The notification unit has the advantage of letting a user of the system know that such a target has entered the predetermined area.
 Specific areas include, for example, places where entry is prohibited, dangerous places, and places that should not be entered. If the area is inside or outside a facility, examples include factory grounds and facilities such as high-voltage power line installations, substations, water supply equipment, and hospitals. If the area is inside a residence, examples include places with a fire source, such as a kitchen, and places with water, such as a bathroom, though the areas are not limited to these.
 Targets related to partial information include, for example, minors, the elderly, dementia patients, and suspicious persons. In this case, users of the system include system administrators, parents of minors, relatives of the elderly, and caregivers of dementia patients, and the terminal of the system may be a terminal used by such a person.
 When an example statistical processing unit sends the above notification to the terminal at the moment of entry, the user of the system can recognize the intrusion in real time. In that case, the user of the notified terminal has the advantage of being able to respond quickly according to the particulars of the intrusion, such as who entered, when, and where.
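The intrusion check behind such a notification can be sketched as testing tracked positions against a restricted region. The axis-aligned rectangular region and the function names are simplifications assumed for illustration; an actual notifier (e-mail, push message, etc.) is only indicated by a comment.

```python
def in_region(point, region):
    """True if (x, y) lies inside an axis-aligned restricted region
    given as (x_min, y_min, x_max, y_max)."""
    x, y = point
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def check_intrusion(track, region):
    """Return the first timestamped position at which a tracked target
    enters the region, or None. `track` is a list of (t, (x, y))."""
    for t, pos in track:
        if in_region(pos, region):
            return t, pos  # in a real system, a notification unit would fire here
    return None

restricted = (10, 10, 20, 20)
track = [(0, (5, 5)), (1, (9, 12)), (2, (12, 15)), (3, (25, 25))]
print(check_intrusion(track, restricted))  # (2, (12, 15))
```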
 The notification unit may also be included in the systems of the examples in Sections 2.1 to 2.6 above. In that case, the specific place may be set, in accordance with each application, as a no-entry place, a dangerous place, or a place that should not be entered, and the target related to the partial information may be a person, a vehicle, livestock, or the like.
 It goes without saying that the invention examples described in the embodiments of this application are not limited to those described here and can be applied to various examples within the scope of the technical idea. The external information processing device may be a cloud or a server using software such as SaaS, PaaS, or IaaS.
 The processes and procedures described in this application may be realized not only by what is explicitly described in the embodiments but also by software, hardware, or a combination thereof. They may be implemented as computer programs and executed by various computers, and such computer programs may be stored in storage media, including non-transitory or temporary storage media.
1401a to 1401c Imaging device
1410 Information processing device
1411 Computing device
1412 Storage device
1413 Input device
1414 Display device
1415 Bus
1416 Communication device
1417 Network
1418 AI device

Claims (15)

  1.  A system comprising:
     an extraction unit that extracts, from a target image, partial information in the target image relating to a moving part of a video that includes the target image; and
     a generation unit that generates, based on the extracted partial information, information about the target related to the partial information.
  2.  A system comprising:
     an extraction unit that extracts partial information from a target image; and
     a generation unit that generates, based on an image related to the extracted partial information, information about the target related to the partial information,
     wherein, in the image related to the partial information, the amount of image information in part of the background region is smaller than the amount of image information at the corresponding part of the target image.
  3.  The system according to claim 1, wherein the extraction unit extracts the partial information using movement of the target in the video.
  4.  The system according to claim 1, wherein the extraction unit extracts the partial information using a difference between one image and another image among the plurality of images constituting the video.
  5.  The system according to any one of claims 1 to 4, wherein the generation unit generates the information about the target related to the image of the partial information by applying that image to a machine learning unit that has machine-learned the relationship between images and targets.
  6.  The system according to any one of claims 1 to 5, wherein the generation unit comprises:
     a transmission unit that transmits the image related to the partial information; and
     an acquisition unit that acquires information about the target corresponding to the transmitted image related to the partial information.
  7.  The system according to any one of claims 1 to 6, wherein the generation unit comprises a machine learning unit that has machine-learned the relationship between images and targets, and generates the information about the target related to the image of the partial information by applying that image to the machine learning unit.
  8.  The system according to any one of claims 1 to 7, wherein the partial information is a partial image of the target image.
  9.  The system according to any one of claims 1 to 8, wherein the partial information is information relating to feature points of a partial image of the target image.
  10.  The system according to any one of claims 1 to 9, wherein the partial information is information relating to a background difference of the target image.
  11.  The system according to any one of claims 1 to 10, wherein the partial information is an image obtained by binarizing or ternarizing the target image.
  12.  The system according to any one of claims 1 to 11, further comprising an estimation unit that estimates the number of targets for a cluster related to the partial information, based on a predetermined rule.
  13.  The system according to any one of claims 1 to 12, further comprising a statistical processing unit that generates statistical information.
  14.  A method in which a computer executes:
     a step of extracting, from a target image, a partial image relating to a moving part of a video that includes the target image; and
     a step of generating, based on the extracted partial image, information about the target related to the partial image.
  15.  A program for causing a computer to function as the system according to any one of claims 1 to 13.
PCT/JP2019/021811 2019-05-31 2019-05-31 System, method, or program WO2020240851A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/021811 WO2020240851A1 (en) 2019-05-31 2019-05-31 System, method, or program
JP2019529956A JPWO2020240851A1 (en) 2019-05-31 2019-05-31 Information processing system, information processing device, server device, program, or method


Publications (1)

Publication Number Publication Date
WO2020240851A1 true WO2020240851A1 (en) 2020-12-03

Family

ID=73553712


Country Status (2)

Country Link
JP (1) JPWO2020240851A1 (en)
WO (1) WO2020240851A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016057998A (en) * 2014-09-12 2016-04-21 株式会社日立国際電気 Object identification method
JP2017163374A (en) * 2016-03-10 2017-09-14 株式会社デンソー Traffic situation analyzer, traffic situation analyzing method, and traffic situation analysis program
WO2019087742A1 (en) * 2017-11-01 2019-05-09 株式会社 東芝 Image sensor, sensing method, control system and program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6348431B2 (en) * 2015-02-24 2018-06-27 株式会社日立製作所 Image processing method and image processing apparatus
JP6947508B2 (en) * 2017-01-31 2021-10-13 株式会社日立製作所 Moving object detection device, moving object detection system, and moving object detection method

Also Published As

Publication number Publication date
JPWO2020240851A1 (en) 2021-09-13

Similar Documents

Publication Publication Date Title
Mishra et al. Drone-surveillance for search and rescue in natural disaster
US11182598B2 (en) Smart area monitoring with artificial intelligence
WO2021036828A1 (en) Object tracking method and apparatus, storage medium, and electronic device
Benabbas et al. Motion pattern extraction and event detection for automatic visual surveillance
Nam et al. Intelligent video surveillance system: 3-tier context-aware surveillance system with metadata
Yan et al. Robot perception of static and dynamic objects with an autonomous floor scrubber
Zeng et al. Research on the algorithm of helmet-wearing detection based on the optimized yolov4
KR102333143B1 (en) System for providing people counting service
Iqbal et al. Autonomous Parking-Lots Detection with Multi-Sensor Data Fusion Using Machine Deep Learning Techniques.
Dubey et al. Identifying indoor navigation landmarks using a hierarchical multi-criteria decision framework
US11727580B2 (en) Method and system for gathering information of an object moving in an area of interest
Sun et al. Automated human use mapping of social infrastructure by deep learning methods applied to smart city camera systems
Rong et al. Big data intelligent tourism management platform design based on abnormal behavior identification
Nam et al. Inference topology of distributed camera networks with multiple cameras
Thakur et al. Autonomous pedestrian detection for crowd surveillance using deep learning framework
WO2020240851A1 (en) System, method, or program
Djeraba et al. Multi-modal user interactions in controlled environments
Wu et al. ADD: An automatic desensitization fisheye dataset for autonomous driving
CN115359568A (en) Simulation method for pedestrian intelligent body movement and emergency evacuation and computer equipment
Karaki et al. A comprehensive survey of the vehicle motion detection and tracking methods for aerial surveillance videos
Kim et al. Small object detection (SOD) system for comprehensive construction site safety monitoring
Aljuaid et al. Postures anomaly tracking and prediction learning model over crowd data analytics
Mudjirahardjo et al. Temporal analysis for fast motion detection in a crowd
Patino et al. Multicamera trajectory analysis for semantic behaviour characterisation
Wang et al. The Limo-Powered Crowd Monitoring System: Deep Life Modeling for Dynamic Crowd With Edge-Based Information Cognition

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019529956

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930396

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930396

Country of ref document: EP

Kind code of ref document: A1