CN111950520B - Image recognition method and device, electronic equipment and storage medium - Google Patents

Image recognition method and device, electronic equipment and storage medium

Info

Publication number
CN111950520B
CN111950520B (application CN202010876946.XA)
Authority
CN
China
Prior art keywords
region
target
cruise
initial detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010876946.XA
Other languages
Chinese (zh)
Other versions
CN111950520A (en)
Inventor
李章勇 (Li Zhangyong)
刘平平 (Liu Pingping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Unisinsight Technology Co Ltd
Original Assignee
Chongqing Unisinsight Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Unisinsight Technology Co Ltd filed Critical Chongqing Unisinsight Technology Co Ltd
Priority to CN202010876946.XA
Publication of CN111950520A
Application granted
Publication of CN111950520B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The image recognition method, apparatus, electronic device and storage medium provided by the embodiments of the invention comprise: acquiring a plurality of images to be recognized corresponding to a target camera, and determining, at each acquisition time, the region of a position of interest in the video picture of the target camera according to a first region motion function corresponding to the position of interest; and when, at a target acquisition time, the region corresponding to the position of interest and the region corresponding to the image to be recognized matched with that acquisition time have an overlapping region, recognizing that image as a target image containing the position of interest. After a large number of images to be recognized are extracted from a video picture in which the position of interest is a local-area feature, the method can accurately retrieve the images containing the position of interest from the huge number of images to be recognized and screen out the images that do not contain it, reducing the time consumed by subsequent image analysis and improving efficiency.

Description

Image recognition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video monitoring, and in particular to an image recognition method and device, an electronic device and a storage medium.
Background
In the field of video monitoring, with the development of high-definition video acquisition technology, the coverage of surveillance cameras grows ever larger. Through analysis and processing of the videos or images obtained by surveillance cameras, effective clues can be found quickly and accurately, and the value of video image resources can be brought into full play.
In the prior art, before clue analysis is performed on video images, a large number of images are obtained either by snapshot from a snapshot camera or by extracting all regions of a video picture, and structured analysis is then performed on this large number of images in the hope of obtaining clues about the user's region of interest. However, this approach also yields a large number of images unrelated to the actual physical area the user cares about, which increases the time the user spends analyzing images and reduces the efficiency of image application.
Therefore, how to recognize the images related to the user's area of attention and screen out the images of non-attention areas, so as to reduce the time the user spends analyzing images and improve the efficiency of image application, is a problem to be solved.
Disclosure of Invention
In view of the above, the present invention provides an image recognition method, an image recognition apparatus, an electronic device, and a storage medium, which are used to recognize an image related to a user attention area, screen out an image of a non-attention area, reduce time consumption of the user in analyzing the image, and improve efficiency of image application.
In order to achieve the above objective, the technical solutions of the present invention are as follows:
In a first aspect, the present invention provides an image recognition method, including: acquiring a plurality of images to be recognized corresponding to a target camera, each image to be recognized corresponding to an acquisition time; at each acquisition time, determining the region of a position of interest in the video picture of the target camera according to a first region motion function corresponding to the position of interest, the position of interest characterizing an actual geographic location of interest to the user, and the first region motion function representing how the region of the position of interest in the video picture changes with time; and when, at a target acquisition time, the region corresponding to the image to be recognized matched with the target acquisition time and the region of the position of interest in the video picture have an overlapping region, recognizing the image to be recognized matched with the target acquisition time as a target image containing the position of interest, the target acquisition time being one or more of the acquisition times.
Optionally, before the step of acquiring a plurality of images to be recognized corresponding to the target camera, the method further includes: acquiring an initial detection area preset in the video picture and a state identifier of the target camera, the initial detection area containing the position of interest; judging, according to the state identifier, whether the target camera is in a cruising state; if not, taking the initial detection area as the first region motion function; if so, acquiring the cruise state parameters of the target camera, determining, according to the cruise state parameters, a second region motion function describing how the initial detection area changes with time, and determining the first region motion function according to the initial detection area and the second region motion function.
Optionally, the step of determining a second region motion function of the initial detection area over time according to the cruise state parameters of the target camera includes: determining the second region motion function based on a plurality of preset waypoint coordinates, a cruise sequence, a cruise speed and a cruise cycle among the cruise state parameters.
Optionally, the step of determining the second region motion function based on a plurality of preset waypoint coordinates, a cruise sequence, a cruise speed and a cruise cycle among the cruise state parameters includes: calculating, according to the cruise sequence, the horizontal direction angle and the Euclidean distance corresponding to the cruise path between two sequentially adjacent preset waypoints; determining, according to the cruise speed and the horizontal direction angle, the movement components of the initial detection area corresponding to different moments in the cruise cycle; and generating the second region motion function according to the coordinates corresponding to the initial detection area and the movement components.
Optionally, the step of determining, at each acquisition time, the region of the position of interest in the video picture of the target camera according to the first region motion function corresponding to the position of interest includes: determining each converted acquisition time according to a preset time corresponding to the initial detection area and the cruise cycle; and determining the region corresponding to the position of interest according to each converted acquisition time and the first region motion function.
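The converted acquisition time can be sketched as a modular offset into the cruise cycle; a minimal illustration, assuming the initial detection area was preset at time t0 and the cruise repeats with a fixed period (the function name is illustrative, not from the patent):

```python
def convert_acquisition_time(t, t0, period):
    """Map an absolute acquisition time t onto an offset within the
    cruise cycle, relative to the preset time t0 of the initial
    detection area, assuming the cruise repeats every `period` seconds."""
    return (t - t0) % period

# With a 60 s cruise cycle whose initial detection area was set at t0 = 100 s,
# an image acquired at t = 190 s maps to an offset of 30 s into the cycle.
offset = convert_acquisition_time(190.0, 100.0, 60.0)
```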
Optionally, before the step of acquiring the operation parameters of the target camera and the preset initial detection area in the video picture, the method further includes: determining the target cameras according to the actual geographic position, the number of target cameras being one or more.
In a second aspect, the present invention provides an image recognition apparatus comprising an acquisition module, a determination module and a recognition module. The acquisition module is used for acquiring a plurality of images to be recognized corresponding to the target camera, each image to be recognized corresponding to an acquisition time. The determination module is used for determining, at each acquisition time, the region of the position of interest in the video picture of the target camera according to the first region motion function corresponding to the position of interest; the position of interest characterizes an actual geographic location of interest to the user, and the first region motion function represents how the region of the position of interest in the video picture changes with time. The recognition module is used for recognizing the image to be recognized matched with a target acquisition time as a target image containing the position of interest when, at that target acquisition time, the region corresponding to the matched image and the region of the position of interest in the video picture have an overlapping region; the target acquisition time is one or more of the acquisition times.
Optionally, the image recognition apparatus further includes a judgment module. The acquisition module is further used for acquiring an initial detection area preset in the video picture and a state identifier of the target camera, the initial detection area containing the position of interest. The judgment module is used for judging, according to the state identifier, whether the target camera is in a cruising state. If it is not, the determination module is further used for taking the initial detection area as the first region motion function; if it is, the acquisition module is further used for acquiring the cruise state parameters of the target camera, and the determination module is further used for determining, according to the cruise state parameters, a second region motion function of the initial detection area over time, and for determining the first region motion function according to the initial detection area and the second region motion function.
In a third aspect, the present invention also provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor can execute the machine executable instructions to implement the image recognition method according to the first aspect.
In a fourth aspect, the present invention also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the image recognition method according to the first aspect.
The embodiments of the invention provide an image recognition method and device, an electronic device and a storage medium. The method includes: acquiring a plurality of images to be recognized corresponding to a target camera, and determining, at each acquisition time, the region of the position of interest in the video picture of the target camera according to the first region motion function corresponding to the position of interest; and if, at a target acquisition time, the region corresponding to the position of interest and the region corresponding to the image to be recognized matched with the target acquisition time have an overlapping region, recognizing the image to be recognized matched with the target acquisition time as the target image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of an image recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another image recognition method provided by the embodiment of the invention;
FIG. 3 is a schematic diagram illustrating the dynamic change of the region of interest in the video frame of the cruise camera according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of another image recognition method provided by the embodiment of the invention;
FIG. 5 is a schematic flow chart of another image recognition method provided by the embodiment of the invention;
FIG. 6 is a functional block diagram of an image recognition apparatus according to an embodiment of the present invention;
FIG. 7 is a functional block diagram of another image recognition apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
In the description of the present invention, it should be noted that, if the terms "upper", "lower", "inner", "outer", etc. are used to indicate the orientation or positional relationship based on the orientation or positional relationship shown in the drawings or the orientation or positional relationship which the product of the present invention is used to usually place, it is only for convenience of description and simplification of the description, but it is not intended to indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are only used to distinguish one description from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
In the field of video monitoring, with the development of high-definition video acquisition technology, the coverage of surveillance cameras grows ever larger, and cameras covering a larger area are often adopted in actual engineering to reduce equipment deployment. As a result, the video images obtained by a surveillance camera cover a wider area and contain more clue features. Case events, however, usually involve specific targets such as particular people or objects, so within a monitoring picture the information of actual interest to the user is usually a feature of a local area.
In the prior art, before clue analysis is performed on video images, a large number of images are obtained either by snapshot from a snapshot camera or by extracting all regions of a video picture, and structured analysis is then performed on this large number of images in the hope of obtaining clues about the user's region of interest and thereby supporting richer applications. However, many of these images are unrelated to the actual physical area the user cares about.
In order to solve the above technical problem, an embodiment of the present invention provides an image recognition method. The method obtains a large number of images to be recognized, captured or extracted from a video picture, and obtains, through structured feature analysis, the coordinates of the region each image occupies in the video picture together with its acquisition time. According to the motion function of the region corresponding to the user's position of interest in the video picture, the region corresponding to the position of interest is determined at each acquisition time. When, at a target acquisition time, the region corresponding to the position of interest and the region corresponding to the image matched with that acquisition time have an overlapping region, the matched image is recognized as a target image. In this way, images containing the position of interest can be retrieved quickly and accurately, images not containing it can be screened out, the time users spend analyzing images is reduced, and the efficiency of image application is improved.
In one application scenario, the method can traverse all images extracted by the device in any time interval and identify the images containing the position of interest, which can then be applied to clue feature analysis. For example, the identified images can greatly improve timeliness and accuracy in retrospective analysis, feature and pattern study, clue search and the like. In the public security industry, when a case occurs at a specific physical location, interfering targets can be filtered out for the officers handling the case, and relevant pedestrian and vehicle results can be provided quickly for study and judgment, saving clue-search time and allowing relevant objects to be found quickly and accurately.
To facilitate understanding of the principle and process by which the present invention achieves the above effect, please refer to fig. 1, which is a schematic flowchart of an image recognition method provided by an embodiment of the present invention. The method may include the following steps:
and S107, acquiring a plurality of images to be recognized corresponding to the target camera.
In the embodiment of the invention, each image to be recognized corresponds to one acquisition time. In one application scenario, the plurality of images to be recognized may be captured by the target camera within an arbitrary time period; in another, they may be extracted from the video picture of the target camera.
In the embodiment of the invention, the obtained images to be recognized may carry structured features extracted in advance for each image by an existing full-target structured feature extraction technique, including the acquisition time and the region coordinates in the video picture. For example, the full-target structured extraction methods currently used by most security manufacturers can be employed, and the fields of the standard data structure for acquisition objects in GA/T 1400 can be used for calculation, which ensures a broad scope of application and a standardized implementation, and provides a non-intrusive data processing means for the application platform.
It can be understood that the number of surveillance cameras is often large and the monitored area wide, so the number of images to be recognized is huge and many of them show non-attention areas. If the obtained images to be recognized were applied directly, long processing times and low efficiency would result; therefore, after a large number of images to be recognized are obtained, the following steps are performed.
S108, at each acquisition time, determining the region of the position of interest in the video picture of the target camera according to the first region motion function corresponding to the position of interest.
In the embodiment of the invention, the position of interest represents the actual geographic location the user cares about. For example, if a user parks a bicycle at the door of supermarket A and the bicycle vanishes without a trace, then when the surveillance video is reviewed, the position of interest is the area near supermarket A. All images containing the area near supermarket A can serve as images to be analyzed, while images not containing that area are clearly of no help to the case analysis and can be discarded. It is understood that the position of interest is set by the user according to the actual scene and is not limited here.
In the embodiment of the present invention, the first region motion function represents how the region corresponding to the position of interest changes with time. It can be understood that a user may set a region corresponding to the position of interest in the video picture of a camera. If the camera is stationary, the region corresponding to the position of interest remains the initially set region; once the camera starts to move, the region corresponding to the position of interest moves along with the video picture, so it changes gradually with time.
S109, when, at the target acquisition time, the region corresponding to the image to be recognized matched with the target acquisition time and the region of the position of interest in the video picture have an overlapping region, recognizing the image to be recognized matched with the target acquisition time as a target image containing the position of interest.
In an embodiment of the present invention, the target acquisition time is one or more of the acquisition times. It can be understood that if, at acquisition time t, the region corresponding to the position of interest computed from the first region motion function is Q, and the region corresponding to the image to be recognized at time t is Q', then "an overlapping region exists between the region corresponding to the position of interest and the region corresponding to the image to be recognized matched with the target acquisition time" can be understood as Q ∩ Q' ≠ ∅, which indicates that the image to be recognized contains the position of interest, so the image corresponding to time t can be recognized as a target image. If the region corresponding to the position of interest at time t has no overlapping region with the region corresponding to the matched image, i.e. Q ∩ Q' = ∅, the image does not contain the position of interest and cannot serve as a target image. In this way, the images containing the position of interest are accurately retrieved from the large number of images to be recognized, the images not containing it are screened out, time is saved for subsequent image analysis, and efficiency is improved.
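The test Q ∩ Q' ≠ ∅ for two axis-aligned rectangular regions can be sketched as follows, with each rectangle written as (x_min, x_max, y_min, y_max); the function names are illustrative, not from the patent:

```python
def rects_overlap(q, q_prime):
    """True if two axis-aligned rectangles, each given as
    (x_min, x_max, y_min, y_max), have a non-empty intersection."""
    qx0, qx1, qy0, qy1 = q
    px0, px1, py0, py1 = q_prime
    return qx0 < px1 and px0 < qx1 and qy0 < py1 and py0 < qy1

def select_target_images(images, region_of_interest_at):
    """Keep only the (time, region) pairs whose region overlaps the
    region of interest Q(t) at that acquisition time."""
    return [(t, r) for t, r in images
            if rects_overlap(region_of_interest_at(t), r)]

# Only the first image overlaps the (static) region of interest here.
images = [(1.0, (5, 15, 5, 15)), (2.0, (20, 30, 0, 10))]
targets = select_target_images(images, lambda t: (0, 10, 0, 10))
```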
The image recognition method provided by the embodiment of the invention includes: acquiring a plurality of images to be recognized corresponding to a target camera, and determining, at each acquisition time, the region of the position of interest in the video picture of the target camera according to the first region motion function corresponding to the position of interest; and if, at a target acquisition time, the region corresponding to the position of interest and the region corresponding to the image matched with that acquisition time have an overlapping region, recognizing the matched image as the target image.
Optionally, in this embodiment of the present invention, when the camera is stationary, the first region motion function is a constant that does not change with time, and may be the initially set region containing the position of interest. When the camera is cruising, the region corresponding to the position of interest moves along with the video picture. To determine the first region motion function, a possible implementation is given below on the basis of fig. 1. Referring to fig. 2, fig. 2 is a schematic flowchart of another image recognition method provided by an embodiment of the present invention, and the method further includes:
s101, acquiring a preset initial detection area containing the interested position in a video picture and a state identifier of a target camera.
In the embodiment of the invention, the user can, according to actual needs, input a rectangular initial detection area in the video picture of the target camera at time t = t_0, which can be written as Q_0 = (x_min, x_max, y_min, y_max), where (x_min, y_min), (x_max, y_min), (x_min, y_max) and (x_max, y_max) denote the four vertex coordinates of the rectangular region, and the region Q_0 contains the actual geographic location of interest to the user. The state identifier may be used to distinguish the cruise state of the target camera, e.g. state identifier 0 indicates not cruising and state identifier 1 indicates cruising.
It should be noted that the above "0" and "1" are only used as examples, and in some other application scenarios, the user may set different state identifiers according to the needs of the user to distinguish between the cruising state and the non-cruising state, which is not limited herein.
S102, judging whether the target camera is in a cruising state according to the state identifier.
If not, executing step S103; if yes, steps S104 and S106 are executed.
S103, taking the initial detection area as the first region motion function corresponding to the position of interest.
It will be appreciated that, when the target camera is not cruising, the position of the initial detection area in the video picture does not change over time; likewise, the position of interest remains located in the initial detection area, so the first region motion function at this time is Q(t) = Q_0.
S104, acquiring the cruise state parameters of the target camera.
S105, determining, according to the cruise state parameters, a second region motion function describing how the initial detection area changes over time.
In a possible implementation, the state parameters may include: a plurality of preset waypoints N = {N_1, N_2, ..., N_m} with corresponding coordinates N_i = (x'_i, y'_i); a cruise sequence, which can be understood as the order in which the target camera passes through the preset waypoints, for example, with 5 preset cruise points N_1, N_2, N_3, N_4, N_5, a cruise sequence N_3 → N_1 → N_4 → N_5 → N_2; and a cruise speed v, e.g. the target camera may move in a straight line at constant speed between preset waypoints according to the speed v and the cruise sequence. An implementation of determining the second region motion function from the cruise state parameters may be:
and determining a second area motion function according to a plurality of preset navigation point coordinates, the cruise sequence and the cruise speed of the cruise state parameters.
To facilitate understanding of the above process, refer to fig. 3, which is a schematic diagram of the dynamic change of the region of the position of interest in the video picture of a cruise camera according to an embodiment of the present invention.
Continuing with fig. 3, assume that at time t_0 = t_i the camera is at the preset waypoint N_i and the user has set an initial detection area Q_0 = (x_min, x_max, y_min, y_max) containing the position of interest.
As the monitoring picture moves toward the preset waypoint N_{i+1}, the initial detection area Q_0 in the monitoring picture moves synchronously, and each moment corresponds to a real-time dynamic initial detection area Q'(t), a linear function of time. At time t it can be expressed as Q'(t) = (x_min + Δx(t), x_max + Δx(t), y_min + Δy(t), y_max + Δy(t)), where Δx(t) and Δy(t) denote the movement components of the real-time dynamic region Q'(t) relative to the initial detection region Q_0 along the x and y axes. Therefore, the region Q(t) where the position of interest lies in the video picture is the gray shaded intersection in fig. 3, which can be expressed as Q(t) = Q'(t) ∩ Q_0.
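Under these relations, Q(t) can be computed by shifting Q_0 by the movement components and intersecting the result with Q_0; a sketch, assuming the shift amounts dx and dy are already known (None stands for an empty intersection):

```python
def shift_region(q0, dx, dy):
    """Real-time region Q'(t): q0 = (x_min, x_max, y_min, y_max)
    shifted by the movement components (dx, dy)."""
    x0, x1, y0, y1 = q0
    return (x0 + dx, x1 + dx, y0 + dy, y1 + dy)

def intersect(q_a, q_b):
    """Intersection of two rectangles; None if it is empty."""
    ax0, ax1, ay0, ay1 = q_a
    bx0, bx1, by0, by1 = q_b
    x0, x1 = max(ax0, bx0), min(ax1, bx1)
    y0, y1 = max(ay0, by0), min(ay1, by1)
    return None if x0 >= x1 or y0 >= y1 else (x0, x1, y0, y1)

# Q(t) = Q'(t) ∩ Q0 for a picture shifted by (30, 10):
q0 = (0.0, 100.0, 0.0, 50.0)
q_t = intersect(shift_region(q0, 30.0, 10.0), q0)
```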
Based on the illustration in fig. 3 and the above explanation, a possible implementation of determining the second region motion function is given below. Referring to fig. 4, fig. 4 is a schematic flowchart of another image recognition method provided by an embodiment of the present invention; that is, step S105 may include the following steps:
S105-1, calculating, according to the cruise sequence, the horizontal direction angle and the Euclidean distance corresponding to the cruise path between each pair of sequentially adjacent preset waypoints.
In the embodiment of the invention, in order to express the second region motion function while the initial detection area moves with the cruise picture, the preset waypoint parameters are transformed. Record the Euclidean distance S_i between two consecutively passed preset waypoints N_i and N_{i+1}; then
S_i = √((x'_{i+1} − x'_i)² + (y'_{i+1} − y'_i)²)
Record the horizontal direction angle θ_i of the cruise path from N_i to N_{i+1}; then
θ_i = arctan((y'_{i+1} − y'_i) / (x'_{i+1} − x'_i))
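For illustration, the per-segment distance and angle may be computed as follows. Note the sketch uses `atan2` instead of a plain arctangent so that vertical segments remain well-defined; that choice, and the example coordinates, are assumptions for illustration:

```python
import math

def segment_params(p, q):
    """Euclidean distance S_i and horizontal direction angle theta_i of the
    cruise segment from waypoint p = (x, y) to waypoint q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

# waypoint coordinates listed in cruise order (illustrative values)
order = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]
params = [segment_params(order[i], order[i + 1]) for i in range(len(order) - 1)]
```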
And S105-2, determining the movement components of the initial detection area corresponding to different moments in the cruise cycle according to the cruise speed and the horizontal direction angle.
In the embodiment of the present invention, the movement components of the initial detection region within one cruise cycle may be expressed by the following relations:
Δx(t) = Σ_{k=1}^{i−1} S_k cos θ_k + v (t − (1/v) Σ_{k=1}^{i−1} S_k) cos θ_i

Δy(t) = Σ_{k=1}^{i−1} S_k sin θ_k + v (t − (1/v) Σ_{k=1}^{i−1} S_k) sin θ_i

for t within the i-th cruise segment, that is, (1/v) Σ_{k=1}^{i−1} S_k < t ≤ (1/v) Σ_{k=1}^{i} S_k, i = 1, 2, ..., m − 1
where Δx(t), Δy(t) denote the movement components of the real-time dynamic region Q'(t) relative to the initial detection region Q_0 along the x-axis and y-axis, v denotes the cruise speed, and m denotes the number of preset waypoints.
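As a sketch of the piecewise relation above, the displacement accumulates full contributions from segments already traversed plus a partial contribution on the current segment. The (length, angle) segment representation is an assumption for illustration:

```python
import math

def movement_components(t, speed, segments):
    """Return (dx, dy): the displacement of the detection region after
    cruising for time t at constant `speed` along straight `segments`,
    each segment given as (length S_i, horizontal angle theta_i)."""
    dx = dy = 0.0
    remaining = t
    for length, angle in segments:
        travel = min(remaining, length / speed)   # time actually spent on this segment
        dx += speed * travel * math.cos(angle)
        dy += speed * travel * math.sin(angle)
        remaining -= travel
        if remaining <= 0:
            break
    return dx, dy
```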
And S105-3, generating a second area motion function according to the coordinates and the movement components corresponding to the initial detection area.
In the embodiment of the present invention, the horizontal direction angles of the different cruise segments are generally not the same; however, since Q'(t) moves as a whole, the movement components of all vertex coordinates of Q'(t) along the x-axis (or along the y-axis) are identical. With the initial detection area defined as Q_0 = (x_min, x_max, y_min, y_max), the real-time dynamic initial detection area can be represented as Q'(t) = (x_min(t), x_max(t), y_min(t), y_max(t)).
And S106, determining a first region motion function corresponding to the interested position according to the initial detection region and the second region motion function.
In an embodiment of the present invention, the first region motion function of the region where the location of interest lies in the video picture may be represented as Q(t) = Q'(t) ∩ Q_0, namely:

Q(t) = (x_min + Δx(t), x_max + Δx(t), y_min + Δy(t), y_max + Δy(t)) ∩ Q_0, t ∈ (0, T]
through the above process, the first region motion function corresponding to the region Q (t) where the interested position is located in the video picture can be finally obtained as follows:
Q(t) = Q_0, when i_circle = 0;

Q(t) = (x_min + Δx(t), x_max + Δx(t), y_min + Δy(t), y_max + Δy(t)) ∩ Q_0, t ∈ (0, T], when i_circle = 1
where i_circle denotes the state identifier and T denotes the cruise period.
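Putting the pieces together, the first region motion function can be sketched as follows; here `delta` stands for any implementation of (Δx(t), Δy(t)) and is a placeholder name, not part of the patent text:

```python
def region_motion(t, i_circle, Q0, delta):
    """First region motion function Q(t): when the camera is not cruising
    (i_circle == 0) the region is simply Q0; otherwise the shifted region
    Q'(t) is intersected with Q0.  Regions are (x_min, x_max, y_min, y_max)."""
    if i_circle == 0:
        return Q0
    dx, dy = delta(t)
    shifted = (Q0[0] + dx, Q0[1] + dx, Q0[2] + dy, Q0[3] + dy)
    x_min, x_max = max(shifted[0], Q0[0]), min(shifted[1], Q0[1])
    y_min, y_max = max(shifted[2], Q0[2]), min(shifted[3], Q0[3])
    if x_min >= x_max or y_min >= y_max:
        return None  # the location of interest is outside the current picture
    return (x_min, x_max, y_min, y_max)
```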
Optionally, the first region motion function describes the above-mentioned law of change of the region of the location of interest in the video picture within one cruise period (e.g. 5 s), whereas in an actual application scenario the acquisition time of each image to be identified is usually an actual clock time (e.g. 8 am). Each acquisition time therefore needs to be converted into a relative time within the cruise period before the first region motion function is applied, that is:
s108-1, determining each converted acquisition time according to the preset time and the cruise cycle corresponding to the initial detection area.
In the embodiment of the present invention, assume that the preset time corresponding to the initial detection area is the acquisition time t_0 and that the acquisition time of the current picture to be identified is t'_j; according to the relation t = (t'_j − t_0) mod T, the converted time t corresponding to the acquisition time t'_j is obtained.
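The conversion t = (t'_j − t_0) mod T is a single modulo step; for illustration, with timestamps in seconds (the values below are assumed, not from the embodiment):

```python
def convert_time(t_actual, t0, T):
    """Map an actual acquisition timestamp t_actual onto the relative time
    within one cruise cycle of length T, using the reference time t0 at
    which the initial detection area was set."""
    return (t_actual - t0) % T

# with a 5 s cruise cycle, a frame captured 12 s after t0 maps to second 2
relative = convert_time(t_actual=8 * 3600 + 12, t0=8 * 3600, T=5)
```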
And S108-2, determining a region corresponding to the interested position according to the converted acquisition time and the first region motion function.
In the embodiment of the present invention, the first region motion function is Q (t), and then the corresponding region Q of the interested location in the video picture at each acquisition time can be obtained according to the following relationship:
Q(t) = (x_min + Δx(t), x_max + Δx(t), y_min + Δy(t), y_max + Δy(t)) ∩ Q_0, with t = (t'_j − t_0) mod T
optionally, before the step of acquiring the operation parameters of the target camera and the initial detection area preset in the video frame, the method further includes: and determining one or more target cameras according to the actual geographic position.
It can be understood that, by performing the implementation steps in the foregoing embodiments, all the pictures captured or extracted by the current device within a selected time period can be examined one by one, and every image to be identified that contains the location of interest can be recognized. In this way, the images containing the location of interest are accurately retrieved from a huge number of images to be identified, while the images not containing it are screened out, which reduces the time consumed by subsequent image analysis and improves efficiency.
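The retrieval described above, keeping only the frames whose detected region overlaps the region of the location of interest at their acquisition time, can be sketched as follows (the dict keys and the `roi_at` callback are illustrative assumptions):

```python
def overlaps(a, b):
    """True if two (x_min, x_max, y_min, y_max) regions share any area."""
    return max(a[0], b[0]) < min(a[1], b[1]) and max(a[2], b[2]) < min(a[3], b[3])

def filter_target_images(images, roi_at):
    """Keep the images whose region overlaps the region of the location of
    interest at their acquisition time; roi_at(t) returns that region."""
    return [img for img in images
            if roi_at(img["time"]) is not None
            and overlaps(img["region"], roi_at(img["time"]))]
```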
In order to implement the steps in the foregoing embodiments and achieve the corresponding technical effects, an implementation of an image recognition apparatus is given below. Referring to fig. 6, fig. 6 is a functional block diagram of an image recognition apparatus provided in an embodiment of the present invention, where the image recognition apparatus 60 includes: an acquisition module 601, a determination module 602, and a recognition module 603.
The acquiring module 601 is configured to acquire a plurality of images to be identified corresponding to a target camera; each image to be identified corresponds to an acquisition moment;
a determining module 602, configured to determine, at each acquisition time, a region of the interested location in a video frame of the target camera according to a first region motion function corresponding to the interested location; the location of interest characterizes an actual geographic location of interest to the user; the first region motion function represents the change relation of a region of the interested position in a video picture along with time;
the identification module 603 is configured to identify, at the target acquisition time, the to-be-identified image matched with the target acquisition time as a target image including the interested position when there is an overlapping region between the region corresponding to the to-be-identified image matched with the target acquisition time and the region of the interested position located in the video picture; the target acquisition time is one or more of the acquisition times; the target image contains a location of interest.
It is understood that the obtaining module 601, the determining module 602 and the identifying module 603 may be used to perform steps S107 to S109 to achieve corresponding technical effects.
Alternatively, in order to implement the function of determining the first region motion function, a possible implementation manner is given below on the basis of fig. 6. Referring to fig. 7, fig. 7 is a functional block diagram of another image recognition apparatus provided in an embodiment of the present invention, where the image recognition apparatus 60 further includes a judging module 604;
the obtaining module 601 is further configured to obtain an initial detection area preset in the video frame and a state identifier of the target camera; the initial detection area contains the location of interest;
the judging module 604 is configured to judge whether the target camera is in a cruise state according to the state identifier;
the determining module 602 is further configured to determine, if not, the initial detection area as a first area motion function;
the obtaining module 601 is further configured to obtain a cruise state parameter of the target camera if the target camera is in the cruising state;
the determining module 602 is further configured to determine a second region motion function of the initial detection region, which varies with time, according to the cruise state parameter; a first region motion function is determined based on the initial detection region and the second region motion function.
It is understood that the obtaining module 601, the determining module 602, and the judging module 604 can be used to execute the steps S101-S106 to achieve the corresponding technical effect.
Optionally, the determining module 602 is specifically configured to: a second zone motion function is determined based on a plurality of preset waypoint coordinates, a cruise sequence, a cruise speed and a cruise period of the cruise status parameters.
Optionally, the determining module 602 is further specifically configured to: calculating a horizontal direction angle and an Euclidean distance corresponding to a cruising path between two preset navigation points adjacent in sequence according to the cruising sequence; determining the movement components of the initial detection area corresponding to different moments in a cruise cycle according to the cruise speed and the horizontal direction angle; and generating a second area motion function according to the coordinates and the movement components corresponding to the initial detection area.
Optionally, the determining module 602 is further specifically configured to: determining each converted acquisition time according to a preset time and a cruise period corresponding to the initial detection area; and determining the region corresponding to the interested position according to each converted acquisition time and the first region motion function.
Optionally, the determining module 602 is further specifically configured to: and determining one or more target cameras according to the actual geographic position.
It is to be appreciated that the determination module 602 can be utilized to perform steps S105-1 through S105-3, and steps S108-1 through S108-2 to achieve corresponding technical effects.
An embodiment of the present invention further provides an electronic device, as shown in fig. 8, and fig. 8 is a block diagram illustrating a structure of the electronic device according to the embodiment of the present invention. The electronic device 80 includes a communication interface 801, a processor 802, and a memory 803. The processor 802, memory 803 and communication interface 801 are in direct or indirect electrical communication with each other to enable the transfer or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 803 may be used for storing software programs and modules, such as program instructions/modules corresponding to the image recognition method provided by the embodiment of the present invention, and the processor 802 executes various functional applications and data processing by executing the software programs and modules stored in the memory 803. The communication interface 801 may be used for communicating signaling or data with other node devices. The electronic device 80 may have a plurality of communication interfaces 801 in the present invention.
The memory 803 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 802 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc.
It is understood that the various modules of the image recognition apparatus 60 described above may be stored in the memory 803 of the electronic device 80 in the form of software or Firmware (Firmware) and executed by the processor 802, and at the same time, data, codes of programs, etc. required for executing the modules may be stored in the memory 803.
An embodiment of the present invention provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the image recognition method according to any one of the foregoing embodiments. The computer readable storage medium may be, but is not limited to, various media that can store program codes, such as a usb disk, a removable hard disk, a ROM, a RAM, a PROM, an EPROM, an EEPROM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An image recognition method, characterized in that the method comprises:
acquiring a plurality of images to be identified corresponding to a target camera; each image to be identified corresponds to one acquisition moment;
at each acquisition moment, determining an area of the interested position in a video picture of the target camera according to a first area motion function corresponding to the interested position; the location of interest characterizes an actual geographic location of interest to the user; the first region motion function represents the time variation of the region of the interest position in the video picture, and is represented as:
Q(t) = Q_0, when i_circle = 0; Q(t) = (x_min + Δx(t), x_max + Δx(t), y_min + Δy(t), y_max + Δy(t)) ∩ Q_0, t ∈ (0, T], when i_circle = 1;

wherein i_circle denotes the state identifier and T denotes the cruise period; Q_0 denotes the initial detection area in the monitoring picture, Q_0 = (x_min, x_max, y_min, y_max); (x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max) denote the four vertex coordinates of the initial detection area; Δx(t), Δy(t) denote the movement components of Q(t) relative to the initial detection region Q_0 along the x-axis and the y-axis;
when at the target acquisition time, the region corresponding to the image to be identified matched with the target acquisition time and the region of the interested position in the video picture have an overlapping region, identifying the image to be identified matched with the target acquisition time as a target image; the target acquisition time is one or more of the acquisition times.
2. The image recognition method according to claim 1, wherein before the step of acquiring a plurality of images to be recognized corresponding to the target camera, the method further comprises:
acquiring an initial detection area preset in the video picture and a state identifier of the target camera, wherein the initial detection area contains the position of interest;
judging whether the target camera is in a cruising state or not according to the state identifier;
if not, taking the initial detection area as a first area motion function corresponding to the interesting position;
if so, acquiring the cruise state parameters of the target camera; determining a second area motion function of the initial detection area along with the change of time according to the cruise state parameters; determining the first region motion function according to the initial detection region and the second region motion function.
3. The image recognition method according to claim 2, wherein the step of determining a second region motion function of the initial detection region over time according to the cruise status parameters of the target camera comprises:
determining a second zone motion function based on a plurality of preset waypoint coordinates, a cruise sequence, a cruise speed and a cruise period of the cruise status parameter.
4. An image recognition method according to claim 3, characterized in that said step of determining a second zone movement function as a function of a plurality of preset waypoint coordinates, a cruise sequence, a cruise speed and a cruise cycle of said cruise status parameter comprises:
calculating a horizontal direction angle and an Euclidean distance corresponding to a cruising path between two preset navigation points adjacent in sequence according to the cruising sequence;
determining the movement components of the initial detection area corresponding to different moments in the cruise cycle according to the cruise speed and the horizontal direction angle;
and generating the second area motion function according to the coordinates corresponding to the initial detection area and the movement component.
5. The image recognition method according to claim 3, wherein the step of determining, at each of the capturing moments, the region of interest where the position of interest is located in the video frame of the target camera according to the first region motion function corresponding to the position of interest includes:
determining each converted acquisition time according to a preset time corresponding to the initial detection area and the cruise period;
and determining the region corresponding to the interested position according to each converted acquisition time and the first region motion function.
6. The image recognition method according to claim 2, wherein before the step of acquiring the operation parameters of the target camera and the preset initial detection area in the video picture, the method further comprises:
and determining the target cameras according to the actual geographic positions, wherein the target cameras are one or more.
7. An image recognition apparatus, comprising:
the acquisition module is used for acquiring a plurality of images to be identified corresponding to the target camera; each image to be identified corresponds to one acquisition moment;
the determining module is used for determining the region of the interested position in the video picture of the target camera according to the first region motion function corresponding to the interested position at each acquisition moment; the location of interest characterizes an actual geographic location of interest to the user; the first region motion function represents the change relation of the region of the interested position in the video picture along with time; the first region motion function is expressed as:
Q(t) = Q_0, when i_circle = 0; Q(t) = (x_min + Δx(t), x_max + Δx(t), y_min + Δy(t), y_max + Δy(t)) ∩ Q_0, t ∈ (0, T], when i_circle = 1;

wherein i_circle denotes the state identifier and T denotes the cruise period; Q_0 denotes the initial detection area in the monitoring picture, Q_0 = (x_min, x_max, y_min, y_max); (x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max) denote the four vertex coordinates of the initial detection area; Δx(t), Δy(t) denote the movement components of Q(t) relative to the initial detection region Q_0 along the x-axis and the y-axis;
the identification module is used for identifying the image to be identified matched with the target acquisition time as a target image containing the interested position when the region corresponding to the image to be identified matched with the target acquisition time and the region of the interested position in the video picture have an overlapping region at the target acquisition time; the target acquisition time is one or more of the acquisition times.
8. The image recognition device according to claim 7, further comprising a judgment module;
the acquisition module is further used for acquiring an initial detection area preset in the video picture and a state identifier of the target camera; the initial detection area contains the location of interest;
the judging module is used for judging whether the target camera is in a cruising state or not according to the state identifier;
the determination module is further configured to determine, if the determination result is negative, the initial detection area as a first area motion function corresponding to the position of interest;
the obtaining module is further used for obtaining the cruise state parameters of the target camera if the target camera is in the cruising state;
the determination module is further used for determining a second area motion function of the initial detection area along with time according to the cruise state parameters; and determining the first region motion function according to the initial detection region and the second region motion function.
9. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the image recognition method of any one of claims 1-6.
10. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the image recognition method of any one of claims 1 to 6.
CN202010876946.XA 2020-08-27 2020-08-27 Image recognition method and device, electronic equipment and storage medium Active CN111950520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010876946.XA CN111950520B (en) 2020-08-27 2020-08-27 Image recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010876946.XA CN111950520B (en) 2020-08-27 2020-08-27 Image recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111950520A CN111950520A (en) 2020-11-17
CN111950520B true CN111950520B (en) 2022-12-02

Family

ID=73366480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010876946.XA Active CN111950520B (en) 2020-08-27 2020-08-27 Image recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111950520B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926580B (en) * 2021-03-29 2023-02-03 深圳市商汤科技有限公司 Image positioning method and device, electronic equipment and storage medium
CN116952166B (en) * 2023-09-20 2023-12-08 菲特(天津)检测技术有限公司 Method, device, equipment and medium for detecting parts of automobile door handle assembly

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101605248A (en) * 2009-07-10 2009-12-16 浙江林学院 Remote video monitoring synchronous tracking method for forest fire
CN101888479A (en) * 2009-05-14 2010-11-17 汉王科技股份有限公司 Method and device for detecting and tracking target image
CN103020983A (en) * 2012-09-12 2013-04-03 深圳先进技术研究院 Human-computer interaction device and method used for target tracking
CN106162070A (en) * 2015-04-23 2016-11-23 神讯电脑(昆山)有限公司 Safety monitoring system and method thereof
CN106331653A (en) * 2016-09-29 2017-01-11 浙江宇视科技有限公司 Method and apparatus for locating panorama camera sub-picture display area
CN107018310A (en) * 2016-10-08 2017-08-04 罗云富 Possess the self-timer method and self-timer of face function
CN107944351A (en) * 2017-11-07 2018-04-20 深圳市易成自动驾驶技术有限公司 Image-recognizing method, device and computer-readable recording medium
CN108537094A (en) * 2017-03-03 2018-09-14 株式会社理光 Image processing method, device and system
CN109063603A (en) * 2018-07-16 2018-12-21 深圳地平线机器人科技有限公司 Image prediction method and apparatus and electronic equipment based on regional dynamics screening
CN109522793A (en) * 2018-10-10 2019-03-26 华南理工大学 More people's unusual checkings and recognition methods based on machine vision
CN110147796A (en) * 2018-02-12 2019-08-20 杭州海康威视数字技术股份有限公司 Image matching method and device
CN110177258A (en) * 2019-06-28 2019-08-27 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110390286A (en) * 2019-07-17 2019-10-29 重庆紫光华山智安科技有限公司 Vehicle tracing test data production method, device, prediction and result test method
CN111552076A (en) * 2020-05-13 2020-08-18 歌尔科技有限公司 Image display method, AR glasses and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8666117B2 (en) * 2012-04-06 2014-03-04 Xerox Corporation Video-based system and method for detecting exclusion zone infractions


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a Mathematical Model for Accurate Vehicle Speed Calculation Based on Video Images; Ni Zhihai et al.; Highway Traffic Science and Technology (Applied Technology Edition); 2015-03-15 (No. 03); 297-300 *

Also Published As

Publication number Publication date
CN111950520A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
US9934453B2 (en) Multi-source multi-modal activity recognition in aerial video surveillance
US9363489B2 (en) Video analytics configuration
US9471849B2 (en) System and method for suspect search
CN108875542B (en) Face recognition method, device and system and computer storage medium
CN108875723B (en) Object detection method, device and system and storage medium
US11699283B2 (en) System and method for finding and classifying lines in an image with a vision system
CN109727275B (en) Object detection method, device, system and computer readable storage medium
US20130243343A1 (en) Method and device for people group detection
EP3373201A1 (en) Information processing apparatus, information processing method, and storage medium
CN111950520B (en) Image recognition method and device, electronic equipment and storage medium
CN111797653A (en) Image annotation method and device based on high-dimensional image
Benito-Picazo et al. Deep learning-based video surveillance system managed by low cost hardware and panoramic cameras
CN106663196A (en) Computerized prominent person recognition in videos
US10762372B2 (en) Image processing apparatus and control method therefor
JP7295213B2 (en) Signal light position determination method, device, storage medium, program, roadside equipment
CN111563398A (en) Method and device for determining information of target object
CN111832579B (en) Map interest point data processing method and device, electronic equipment and readable medium
CN103327251B (en) A kind of multimedia photographing process method, device and terminal equipment
KR101804471B1 (en) Method And Apparatus for Analyzing Video
JP2009123150A (en) Object detection apparatus and method, object detection system and program
CN112037255A (en) Target tracking method and device
US20230156159A1 (en) Non-transitory computer-readable recording medium and display method
CN113628251B (en) Smart hotel terminal monitoring method
CN114882073A (en) Target tracking method and apparatus, medium, and computer device
CN115393755A (en) Visual target tracking method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant