CN111738152B - Image determining method and device, storage medium and electronic device

Image determining method and device, storage medium and electronic device

Info

Publication number
CN111738152B
CN111738152B
Authority
CN
China
Prior art keywords
image
target object
target
images
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010575238.2A
Other languages
Chinese (zh)
Other versions
CN111738152A (en)
Inventor
陈伟国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010575238.2A
Publication of CN111738152A
Application granted
Publication of CN111738152B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image determining method and device, a storage medium, and an electronic device. The method includes: acquiring target parameters of a first target object in each image included in a group of images, and determining a first target image from the group of images based on the target parameters of the first target object in each image and the state information of the first target object in each image. The invention solves the technical problem in the related art that the optimal moment of a target object is difficult to select effectively.

Description

Image determining method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to a method and apparatus for determining an image, a storage medium, and an electronic apparatus.
Background
In current video structuring products, the image in which a target object appears at its optimal moment is often selected from a group of pictures, and corresponding processing is then performed on it. In the prior art, a trigger line is configured, and the first time the position of the target object in the image crosses the trigger line, that frame is taken as the target object's optimal moment. This approach has clear limitations: for example, if the target object happens to cross and overlap with another target object while passing the trigger line, occlusion occurs and the target information is incomplete. Moreover, for relatively complex scenes, configuring the trigger line depends on the skill of the operator, leaving little room for adjustment.
For the problem in the related art that the optimal moment of a target object is difficult to select effectively, no effective solution has been proposed so far.
Disclosure of Invention
The embodiments of the invention provide an image determining method and device, a storage medium, and an electronic device, which at least solve the technical problem in the related art that the optimal moment of a target object is difficult to select effectively.
According to one aspect of an embodiment of the present invention, there is provided an image determining method, including: acquiring target parameters of a first target object in each image included in a group of images, wherein the target parameters include a target integrity parameter, a target pose parameter, and a target relative position parameter of the first target object; and determining a first target image from the group of images based on the target parameters of the first target object in each image and state information of the first target object in each image, wherein the state information is used to indicate the existence state and state duration information of the first target object.
According to another aspect of an embodiment of the present invention, there is also provided an image determining apparatus, including: an acquisition module configured to acquire target parameters of a first target object in each image included in a group of images, wherein the target parameters include a target integrity parameter, a target pose parameter, and a target relative position parameter of the first target object; and a determining module configured to determine a first target image from the group of images based on the target parameters of the first target object in each image and state information of the first target object in each image, wherein the state information is used to indicate the existence state and state duration information of the first target object.
According to a further aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-described image determining method when run.
According to still another aspect of the embodiments of the present invention, there is further provided an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, performs the above image determining method.
In the embodiments of the present invention, target parameters of a first target object are acquired for each image included in a group of images, and a first target image is determined from the group of images based on those target parameters and the state information of the first target object in each image. This replaces the prior-art scheme of searching for the optimal moment of a target object in a group of images by means of a trigger line, solves the problem in the related art that the optimal moment of a target object is difficult to select effectively, reduces the human error introduced by a trigger line, reduces the chance that important information is occluded at the trigger line, and thereby improves the recognition of the target object.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic illustration of an application environment of an alternative image determination method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative method of determining an image according to an embodiment of the invention;
FIG. 3 is a flow chart of another alternative method of image determination according to an embodiment of the present invention;
FIG. 4 is a flow chart of yet another alternative method of image determination according to an embodiment of the present invention;
FIG. 5 is a flow chart of yet another alternative method of image determination according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative image determining apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiment of the present invention, there is provided a method for determining an image, optionally, as an optional implementation manner, the method for determining an image may be applied, but is not limited to, in the environment shown in fig. 1.
In an alternative embodiment, the above image determining method is executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running the image determining method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and optionally a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for determining an image in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the above-described method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. The specific examples of networks described above may include wireless networks provided by the communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
Fig. 2 is a flow chart of a method for determining an image according to an embodiment of the present invention, as shown in fig. 2, the flow includes the following steps:
S202, acquiring target parameters of a first target object in each image included in a group of images, wherein the target parameters include a target integrity parameter, a target pose parameter, and a target relative position parameter of the first target object;
S204, determining a first target image from the group of images based on target parameters of the first target object in the images and state information of the first target object in the images, wherein the state information is used for indicating the existence state and state duration information of the first target object.
Optionally, in this embodiment, the first target object may include, but is not limited to, a person, an animal, a vehicle, or any other object that appears in a video image and can be recognized from it. There may be one or more first target objects; when there are multiple first target objects, the same or different first target images are determined for the multiple target objects by traversing each target object. The foregoing is merely an example, and the present invention does not limit the number or kind of first target objects.
Optionally, in this embodiment, the group of images may include, but is not limited to, a group formed by all frame images of a video, or a group formed by some of the frame images of a video. The target integrity parameter represents how complete the target object is in a picture, for example fully occluded, half occluded, or occluded by some percentage. The target pose parameter represents the pose of the target object in a picture, for example whether the target object is seen from the front, the side, or the back. The target relative position parameter represents the relative position of the target object in a picture, for example at the upper edge, in the middle, or at the lower edge. The above are merely examples; the specific meanings of the target integrity parameter, the target pose parameter, and the target relative position parameter may also include one or more combinations of the above, which is not limited in this disclosure.
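Purely as an illustrative sketch, and not part of the original disclosure, the three target parameters and their possible values could be modeled as follows; all type and field names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Integrity(Enum):
    ABSENT = -1      # target not present in the image
    INCOMPLETE = 0   # occluded, or cut off at the image boundary
    COMPLETE = 1     # fully visible

class Pose(Enum):
    UNKNOWN = 0
    FRONT = 1
    SIDE = 2
    BACK = 3

class RelativePosition(Enum):
    LOWER_EDGE = 0   # near the lower edge of the frame
    NON_EDGE = 1     # in the central, non-edge region
    UPPER_EDGE = 2   # near the upper edge of the frame

@dataclass
class TargetParameters:
    integrity: Integrity
    pose: Pose
    position: RelativePosition
```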
Alternatively, in the present embodiment, the above-described state information is used to identify the state of the target object, for example, the existence state of the target object in the preset processing model, state duration information of the target object itself, and the like.
Optionally, in this embodiment, fig. 3 is a schematic flow chart of an optional image determining method according to an embodiment of the present invention, and a specific flow is as follows:
S302, starting;
S304, performing an integrity judgment on the target object in the input image (corresponding to the target integrity parameter);
S306, performing a pose judgment on the target object in the input image (corresponding to the target pose parameter);
S308, scoring the target object in the input image (corresponding to the target parameters) and outputting a target image (corresponding to the first target image);
S310, ending.
According to the present invention, target parameters of a first target object are acquired for each image included in a group of images, and a first target image is determined from the group of images based on those target parameters and the state information of the first target object in each image. This replaces the prior-art scheme of searching for the optimal moment of a target object in a group of images by means of a trigger line, solves the problem in the related art that the optimal moment of a target object is difficult to select effectively, reduces the human error introduced by a trigger line, reduces the chance that important information is occluded at the trigger line, and thereby improves the recognition of the target object.
In an alternative embodiment, acquiring target integrity parameters of a first target object in images included in a set of images includes: for any one of the set of images, performing the following operations to obtain target integrity parameters of the first target object in each of the images included in the set of images: acquiring coordinate information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target integrity parameter of the first target object in the first image based on the coordinate information and an overlap state, wherein the overlap state is used for indicating whether the first target object and other objects are overlapped in the first image when the first image comprises the other objects; and determining a target integrity parameter of the first target object in the first image as a parameter for indicating that the first target object is not present under the condition that the first image is determined not to comprise the first target object.
Optionally, in this embodiment, in the case where it is determined that a first image included in the group of images includes the first target object, coordinate information of the first target object in the first image is acquired, and the target integrity parameter of the first target object in the first image is determined based on the coordinate information and the overlap state. This may include, but is not limited to, first determining whether the first image includes the first target object: for example, the first target object is placed in the same coordinate system as the first image, it is determined whether the first target object falls entirely within the coordinate range of the first image, and the first image is determined to include the first target object in the case that the coordinate range of the first image contains the first target object.
In an alternative embodiment, determining the target integrity parameter of the first target object in the first image based on the coordinate information and the overlap state includes: determining that the target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in an incomplete state, in the case that the first target object is located at an image boundary of the first image based on the coordinate information; determining that the target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in an incomplete state, in the case that the first target object is located inside the image of the first image based on the coordinate information and the first target object overlaps with other objects that are present; and determining that the target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in a complete state, in the case that the first target object is located inside the image based on the coordinate information and no other object is present or the first target object does not overlap with the other objects that are present. The image boundary is a pre-divided area at the edge of the first image in which the first target object cannot be displayed completely, and the image interior is the area of the first image other than the image boundary.
Optionally, in this embodiment, the target integrity parameter may be identified by 0 or 1: the parameter value indicating that the first target object is in an incomplete state is set to 0, and the parameter value indicating that the first target object is in a complete state is set to 1. The image boundary may be set manually in advance through a system or a server, or determined by an artificial-intelligence image recognition method. Determining the boundary may include, but is not limited to, placing the image in a coordinate system, taking the edge with the highest ordinate as the upper boundary of the image and the edge with the lowest ordinate as the lower boundary, and determining from the coordinate information of the first target object that it lies inside the image when the ordinate of each of its points is greater than the lower-boundary coordinate and less than the upper-boundary coordinate.
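The integrity judgment described above can be sketched in Python as follows. This is an assumed implementation: bounding boxes in (x1, y1, x2, y2) form, a hypothetical boundary margin, and a simple box-intersection overlap test are all choices not fixed by the text.

```python
def integrity_parameter(target_box, image_w, image_h, other_boxes, margin=5):
    """Return 1 if the target is complete, 0 otherwise (illustrative sketch).

    target_box and each entry of other_boxes are (x1, y1, x2, y2) tuples in
    image coordinates; margin is an assumed width of the pre-divided
    image-boundary region.
    """
    x1, y1, x2, y2 = target_box
    # Target touching the pre-divided boundary region: incomplete state.
    if x1 < margin or y1 < margin or x2 > image_w - margin or y2 > image_h - margin:
        return 0
    # Target inside the image but overlapping another object: incomplete state.
    for ox1, oy1, ox2, oy2 in other_boxes:
        if x1 < ox2 and ox1 < x2 and y1 < oy2 and oy1 < y2:  # boxes intersect
            return 0
    return 1  # inside the image and not overlapping anything: complete state
```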
Optionally, in this embodiment, the first target object overlapping with other objects that are present may include, but is not limited to, one or more of the coordinate points of the first target object in the coordinate system coinciding with one or more of the coordinate points of the other objects.
In an alternative embodiment, acquiring the target pose parameter of the first target object in each image included in the set of images includes: for any one of the set of images, performing the following operations to obtain the target pose parameter of the first target object in each image included in the set of images: acquiring motion trajectory coordinates of the first target object in a first image included in the set of images in the case that the first image includes the first target object; and determining the target pose parameter of the first target object in the first image based on the motion trajectory coordinates; wherein the motion trajectory coordinates are the difference between the coordinates of the first target object in the first image and its coordinates in a second image, the second image being the image included in the set of images that is located one frame before the first image.
Optionally, in this embodiment, acquiring the target pose parameter of the first target object in each image included in the set of images includes determining the movement direction of the first target object from its motion trajectory, and thereby determining its pose. For example, the initial pose of the first target object is set to unknown. Trajectory point B of the first target object in the previous frame is subtracted from trajectory point A in the current frame to obtain point C. When the ordinate of C is 0, the pose is side; when the abscissa of C is 0, the pose is front if the ordinate is less than or equal to 0, and back if the ordinate is greater than 0.
In an alternative embodiment, determining the target pose parameter of the first target object in the first image based on the motion trajectory coordinates includes: determining that the target pose parameter of the first target object in the first image is a parameter for indicating that the first target object is in a side state, in the case that the ordinate of the motion trajectory coordinates is 0; determining that the target pose parameter is a parameter for indicating that the first target object is in a front state, in the case that the abscissa of the motion trajectory coordinates is 0 and the ordinate is less than or equal to 0; and determining that the target pose parameter is a parameter for indicating that the first target object is in a back state, in the case that the abscissa of the motion trajectory coordinates is 0 and the ordinate is greater than 0.
Optionally, in this embodiment, based on the quadrant in which the coordinates of C lie, the angle θ between the line connecting C to the origin and the horizontal is calculated as θ = atan(abs(C.y) / abs(C.x)), where abs(C.y) is the absolute value of the ordinate of point C and abs(C.x) is the absolute value of its abscissa. The target pose parameter of the first target object in the current input image is then determined according to Table 1.
TABLE 1 Pose correspondence table
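Since the contents of Table 1 are not reproduced in this text, the following sketch implements only the explicit rules stated above (ordinate 0 means side; abscissa 0 means front or back depending on the sign of the ordinate) and uses a purely illustrative 45-degree split in place of the quadrant/angle lookup; the function name and threshold are hypothetical.

```python
import math

def pose_parameter(curr_point, prev_point):
    """Classify the pose from the trajectory difference C = A - B (sketch)."""
    cx = curr_point[0] - prev_point[0]
    cy = curr_point[1] - prev_point[1]
    if cx == 0 and cy == 0:
        return "unknown"                       # no motion between the two frames
    if cy == 0:
        return "side"                          # purely horizontal motion
    if cx == 0:
        return "front" if cy <= 0 else "back"  # purely vertical motion
    # General case: angle between the motion vector and the horizontal axis,
    # theta = atan(abs(C.y) / abs(C.x)). The real angle-to-pose mapping comes
    # from Table 1; the 45-degree split below is illustrative only.
    theta = math.degrees(math.atan(abs(cy) / abs(cx)))
    if theta < 45:
        return "side"
    return "front" if cy < 0 else "back"
```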
Through this embodiment, the accuracy of the target determination method can be improved and the method can be adapted to different scenes: different target object poses can be favored for subsequent processing according to the actual requirement. For example, when the target objects to be determined are persons and motor vehicles, face detection may need to be performed after selection, so when the first target object is a person the front pose is given the largest weight, the side pose the next largest, and the back pose the smallest; for a vehicle, because of the license plate, the front pose is given the largest weight, the back pose the next largest, and the side pose the smallest.
In an alternative embodiment, acquiring the target relative position parameter of the first target object in each image included in the set of images includes: for any one of the set of images, performing the following operations to obtain target relative position parameters of the first target object in each of the images included in the set of images: acquiring relative position information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target relative position parameter of the first target object in the first image based on the relative position information; wherein the relative position information is a distance of the first target object in the first image from a lower edge of the first image.
Optionally, in this embodiment, a target at the lower edge of the image is the sharpest, so the target object is easier to recognize in subsequent image recognition and accuracy can be guaranteed, whereas a target at the upper edge is too small for its accuracy to be guaranteed. The ordinate of the target relative to the image is therefore taken into account when determining the target relative position parameter, and the target relative position parameters of the first target object in different pictures are determined accordingly, which benefits subsequent image processing and classification.
In an alternative embodiment, determining the target relative position parameter of the first target object in the first image based on the relative position information includes: determining that the target relative position parameter of the first target object in the first image is a parameter for indicating that the first target object is in a lower-edge state, in the case that the relative position information indicates that the distance between the first target object in the first image and the lower edge of the first image is less than or equal to a first threshold; determining that the target relative position parameter is a parameter for indicating that the first target object is in a non-edge-area state, in the case that the relative position information indicates that the distance between the first target object and the lower edge of the first image is greater than the first threshold and less than or equal to a second threshold; and determining that the target relative position parameter is a parameter for indicating that the first target object is in an upper-edge state, in the case that the relative position information indicates that the distance between the first target object and the lower edge of the first image is greater than the second threshold.
Optionally, in this embodiment, the first threshold and the second threshold may be preset by a system or a server. For example, when the distance from the lower edge of the image is less than five pixels, the target relative position parameter is determined as the parameter indicating that the first target object is in the lower-edge state; when the distance from the lower edge is greater than five pixels and the distance from the upper edge is also greater than five pixels, as the parameter indicating the non-edge-area state; and when the distance from the upper edge is less than five pixels, as the parameter indicating the upper-edge state.
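A minimal sketch of the relative-position judgment, assuming image coordinates in which y grows downward and treating both thresholds as configuration values; all names are hypothetical.

```python
def relative_position_parameter(target_bottom_y, image_height,
                                first_threshold, second_threshold):
    """Classify the target's position by its distance from the lower edge (sketch).

    With y increasing downward, the distance from the lower edge is
    image_height - target_bottom_y; the two thresholds correspond to the
    configurable first and second thresholds from the method.
    """
    distance = image_height - target_bottom_y
    if distance <= first_threshold:
        return "lower_edge"
    if distance <= second_threshold:
        return "non_edge"
    return "upper_edge"
```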
According to the embodiment, different target relative position parameters can be configured for the first target object based on different positions of the first target object relative to the image, so that the image which can better show the best relative position of the first target object in a group of pictures can be selected, and the technical effect of image determination is further improved.
In an alternative embodiment, determining the first target image from the set of images based on the target parameters of the first target object in each image and the state information of the first target object in each image includes: determining weight values respectively corresponding to the pre-configured target integrity parameter, target pose parameter, and target relative position parameter; determining a total parameter value of the first target object in each image from the target integrity parameter of the first target object in each image and its corresponding weight value, the target pose parameter and its corresponding weight value, and the target relative position parameter and its corresponding weight value; and determining the first target image based on the total parameter value of the first target object in each image and the state information of the first target object in each image.
Optionally, in this embodiment, the total parameter value of the first target object in an individual image is determined by assigning different weight values to the target integrity parameter, the target pose parameter, and the target relative position parameter. For example, with weights of 40%, 40%, and 20% respectively, the total parameter value may be, but is not limited to, target integrity parameter × 40% + target pose parameter × 40% + target relative position parameter × 20%. This calculation is merely an example; the specific calculation method and the weight values may be adjusted according to the actual situation.
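The weighted scoring can be sketched as below, using the example weights from the text; the numeric scores assigned to each parameter value, like the weights themselves, are configurable assumptions.

```python
# Example weights from the text: 40% integrity, 40% pose, 20% relative position.
W_INTEGRITY, W_POSE, W_POSITION = 0.4, 0.4, 0.2

def total_parameter_value(integrity_score: float, pose_score: float,
                          position_score: float) -> float:
    """Weighted sum of the three target parameters for one image."""
    return (integrity_score * W_INTEGRITY
            + pose_score * W_POSE
            + position_score * W_POSITION)
```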
In an alternative embodiment, before determining the first target image from the set of images based on the target parameters of the first target object in each image and the state information of the first target object in each image, the method further includes performing the following operations for any one of the set of images, to obtain the state information of the first target object in each image included in the set of images, as mapped in the sketch below: determining the state information of the first target object as information indicating a creation state, in the case that the first target object appears for the first time in a first image included in the set of images; determining the state information as information indicating an update state, in the case that the first target object appears in the first image and also appeared in a second image, the second image being the image in the set that is located one frame before the first image; determining the state information as information indicating a lost state, in the case that the first target object appears in the first image, did not appear in the second image, and appeared in a third image, the third image being an image in the set that precedes the second image; and determining the state information as information indicating a deletion state, in the case that the first target object appears neither in the first image nor in any of a consecutive predetermined number of frame images, the last of which is the frame immediately before the first image.
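One plausible reading of these four conditions as a state function is sketched here; the boolean inputs and the fallback value are assumptions, since the text does not name a state for frames matching none of the four cases.

```python
def target_state(appears_now: bool, appeared_prev: bool, appeared_before: bool,
                 frames_absent: int, max_absent: int) -> str:
    """Map a tracked target's appearance history to one of the four states.

    appears_now / appeared_prev / appeared_before refer to the current frame,
    the immediately preceding frame, and any earlier frame; frames_absent
    counts consecutive frames without the target, and max_absent is the
    predetermined number of frames after which the target is deleted.
    """
    if appears_now and not appeared_prev and not appeared_before:
        return "create"   # first appearance in the set of images
    if appears_now and appeared_prev:
        return "update"   # present in both the current and the previous frame
    if appears_now and not appeared_prev and appeared_before:
        return "lost"     # reappears after being missed in the previous frame
    if not appears_now and frames_absent >= max_absent:
        return "delete"   # absent for the predetermined number of frames
    return "none"         # no state transition for this frame
```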
Through this embodiment, the state information of the first target object in each image included in the set of images can be determined, and different corresponding operations are performed in the binary tree model based on the different state information. The method can thus adapt to different scenes, and the states of different target objects can be selected for subsequent processing according to the actual requirement.
In an alternative embodiment, determining the first target image based on the total parameter value of the first target object in each image and the state information of the first target object in each image includes the following. In the case that the state information of the first target object in the currently input image is the creation state or the update state, it is determined whether history information of the first target object is recorded in a preset binary tree model, the history information recording the historical highest total parameter value of the first target object in the binary tree model. If the history information is recorded in the binary tree model and the total parameter value of the first target object in the currently input image is greater than the historical highest total parameter value recorded for the first target object, first history information included in the history information is updated based on the record information of the first target object in the currently input image, the updated first history information including the total parameter value of the first target object in the currently input image, the number of occurrences of the first target object in the set of images, the reference base, and the index value of the currently input image. If the history information is recorded in the binary tree model and the total parameter value of the first target object in the currently input image is less than or equal to the historical highest total parameter value, second history information included in the history information is updated based on the record information of the first target object in the currently input image, the updated second history information including the number of occurrences of the first target object in the set of images, the reference base, and the index value of the currently input image. If no history information is recorded in the binary tree model, the record information of the currently input image is determined as the history information of the first target object in the binary tree model. In the case that the state information of the first target object in the currently input image is the lost state, third history information included in the history information is updated based on the record information of the first target object in the currently input image, the updated third history information including the number of occurrences of the first target object in the set of images and the index value of the currently input image; the image corresponding to the historical highest total parameter value recorded by the binary tree model is then determined based on the adjusted history information and determined as the first target image. In the case that the state information of the first target object in the currently input image is the deletion state, the image corresponding to the historical highest total parameter value recorded by the binary tree model is determined as the first target image.
Optionally, in this embodiment, the history information may include, but is not limited to, the historical highest total parameter value, the reference base of the currently input image, the index value of the currently input image, and the number of occurrences of the first target object in the set of images. The reference base of the currently input image indicates, when there are multiple first target objects, that the currently input image is the target image corresponding to the highest parameter value of one or more target objects; the number of such target objects is the reference base. The index value identifies a target image: when a target image is output, it is looked up by its index value. The number of occurrences of the first target object in the set of images may be used to set an output condition, with the number of occurrences serving as an output threshold; for example, after the first target object has appeared 10 times, the image corresponding to its highest total parameter value among those ten images is output. The number of occurrences may also be set to 0 to indicate that the target object has not appeared, or that the image is not its highest-valued one, in the set of target images.
The above is merely an example, and the specific recorded content of the history information may include one or more of the above in combination.
Optionally, in this embodiment, in a case where the state information of the first target object in the currently input image is a creation state or an update state, the history information about the first target object in the binary tree model is updated when the total parameter value of the first target object in the currently input image is greater than the historical highest total parameter value recorded by the first target object in the binary tree model, the highest total parameter value up to the current input image is recorded in the new history information, and when the total parameter value of the first target object in the currently input image is less than or equal to the historical highest total parameter value, the reference base and the index value about the current input image recorded in the binary tree model and the number of occurrences of the first target object in a group of images are updated, and the total parameter value corresponding to the target object is not updated. Under the condition that the state information of a first target object in a current input image is in a lost state, updating the occurrence times of the first target object in a group of images included in history record information based on the record information of the first target object in the current input image, determining an image corresponding to the highest historical total parameter value recorded by a binary tree model based on the adjusted history record information, and determining the image corresponding to the highest historical total parameter value recorded by the binary tree model as the first target image; and under the condition that the state information of the first target object in the currently input image is in a deleting state, determining the image corresponding to the historical highest total parameter value recorded by the binary tree model as the first target image.
Optionally, in this embodiment, fig. 4 is a schematic flow chart of another alternative image determining method according to an embodiment of the present invention, as shown in fig. 4, the steps of the flow are as follows:
S402, starting;
S404, determining the state of the target (corresponding to the aforementioned target object): if the state is the Create or Update state, jump to step S406; if the state is the Lost state, jump to step S414; if the state is the Delete state, jump to step S416;
S406, in the case that the target state is the Create or Update state, confirming whether the target has been inserted into the binary tree: jump to step S408 after determining that the target has been inserted, and to step S412 after determining that it has not;
S408, after determining that the target has been inserted into the binary tree, comparing the target score (corresponding to the total parameter value) with the historical highest score recorded in the binary tree (corresponding to the historical highest total parameter value): jump to step S410 if the target score is higher than the recorded score, otherwise jump to step S414;
S410, in the case that the target score is higher than the score recorded in the binary tree, updating the target information, adding 1 to the reference base of the current image, and subtracting 1 from the reference base of the image corresponding to the target's previous highest score;
S412, recording the current target information (corresponding to the history information) and inserting it into the binary tree;
S414, adding 1 to the number of occurrences of the target, updating the index value of the target frame (the first target image) to the index value of the current frame (the currently input image), and proceeding to step S418;
S416, confirming whether the number of occurrences of the target has reached the picking threshold;
S418, determining the first target image corresponding to the target object in the case that the number of occurrences of the target has reached the picking threshold, and repeating the above process in the case that it has not;
S420, ending.
Through this embodiment, the image corresponding to the historical highest total parameter value recorded in the binary tree model is guaranteed to be the first target image, so that the image in which the target looks best over its whole life cycle, or over a preset cycle, is selected. This solves the problem in the related art that the optimal moment of a target object is difficult to select effectively, reduces the human error introduced by a trigger line, reduces the chance that important information is occluded at the trigger line, and thereby improves the recognition of the target object.
In an alternative embodiment, the method further includes: determining the state duration information of the first target object; in the case that the state duration information indicates a motion state, determining the image corresponding to the historical highest total parameter value recorded by the binary tree model as the first target image and subtracting 1 from the reference base of the first target image; in the case that the state duration information indicates a still state, subtracting 1 from the reference base of the first target image; and deleting the first target image in the case that its reference base is 0.
Optionally, in this embodiment, the state duration information may be preset by a system or a server, or obtained by an existing algorithm. The reference base of the target image is acquired and decremented by 1 after the target image has been screened, and the first target image is deleted in the case that its reference base is 0, a reference base of 0 meaning that the target image is no longer referenced by any target object.
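The reference-base handling can be sketched as follows, under the assumption that cached frames are keyed by their index values; output for still targets is suppressed as in the flow of fig. 5 below, and all names are hypothetical.

```python
image_cache: dict[int, bytes] = {}   # cached frames keyed by index value
ref_counts: dict[int, int] = {}      # reference base per cached frame

def release_target_image(frame_index: int, is_moving: bool) -> bytes | None:
    """Return the cached best image for a moving target, then drop one reference."""
    result = image_cache.get(frame_index) if is_moving else None
    ref_counts[frame_index] -= 1
    if ref_counts[frame_index] == 0:
        # No target's best snapshot lives in this frame any more: free the cache.
        del ref_counts[frame_index]
        image_cache.pop(frame_index, None)
    return result
```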
Optionally, in this embodiment, fig. 5 is a schematic flow chart of another alternative image determining method according to an embodiment of the present invention, as shown in fig. 5, the steps of the flow are as follows:
S502, starting;
S504, determining whether the target is moving (corresponding to the state duration information of the aforementioned target object);
S506, in the case that the target is a static target, not outputting the first target image and subtracting 1 from the reference base of the target image;
S508, in the case that the target is a moving target, outputting the first target image and subtracting 1 from the reference base of the target image;
S510, ending.
Through this embodiment, a reference base is established for each target image and whether the image is deleted is determined based on that reference base, which saves cache space, reduces computation cost, improves the user experience, and further improves the efficiency of selecting the best image.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
According to another aspect of the embodiment of the present invention, there is also provided an image determining apparatus for implementing the above image determining method. As shown in fig. 6, the apparatus includes:
An obtaining module 602, configured to obtain target parameters of a first target object in each image included in a set of images, where the target parameters include a target integrity parameter, a target pose parameter, and a target relative position parameter of the first target object;
a determining module 604, configured to determine a first target image from the set of images based on a target parameter of the first target object in each image and state information of the first target object in each image, where the state information is used to indicate a presence state and state duration information of the first target object.
In an alternative embodiment, the obtaining module 602 is configured to obtain the target integrity parameter of the first target object in each image included in the set of images by: for any one of the set of images, performing the following operations to obtain target integrity parameters of the first target object in each of the images included in the set of images: acquiring coordinate information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target integrity parameter of the first target object in the first image based on the coordinate information and an overlap state, wherein the overlap state is used for indicating whether the first target object and other objects are overlapped in the first image when the first image comprises the other objects; and determining a target integrity parameter of the first target object in the first image as a parameter for indicating that the first target object is not present under the condition that the first image is determined not to comprise the first target object.
In an alternative embodiment, the obtaining module 602 is configured to determine the target integrity parameter of the first target object in the first image based on the coordinate information and the overlap state by: determining that the target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in an incomplete state, in the case that the first target object is located at an image boundary of the first image based on the coordinate information; determining that the target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in an incomplete state, in the case that the first target object is located inside the image of the first image based on the coordinate information and the first target object overlaps with other objects that are present; and determining that the target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in a complete state, in the case that the first target object is located inside the image based on the coordinate information and no other object is present or the first target object does not overlap with the other objects that are present. The image boundary is a pre-divided area at the edge of the first image in which the first target object cannot be displayed completely, and the image interior is the area of the first image other than the image boundary.
In an alternative embodiment, the acquiring module 602 is configured to acquire the target pose parameter of the first target object in each image included in the set of images by: for any one of the set of images, performing the following operations to obtain the target pose parameter of the first target object in each image included in the set of images: acquiring motion trajectory coordinates of the first target object in a first image included in the set of images in the case that the first image includes the first target object; and determining the target pose parameter of the first target object in the first image based on the motion trajectory coordinates; wherein the motion trajectory coordinates are the difference between the coordinates of the first target object in the first image and its coordinates in a second image, the second image being the image included in the set of images that is located one frame before the first image.
In an alternative embodiment, the obtaining module 602 is configured to determine the target pose parameter of the first target object in the first image based on the motion trajectory coordinates by: determining that the target pose parameter of the first target object in the first image is a parameter for indicating that the first target object is in a side state, in the case that the ordinate of the motion trajectory coordinates is 0; determining that the target pose parameter is a parameter for indicating that the first target object is in a front state, in the case that the abscissa of the motion trajectory coordinates is 0 and the ordinate is less than or equal to 0; and determining that the target pose parameter is a parameter for indicating that the first target object is in a back state, in the case that the abscissa of the motion trajectory coordinates is 0 and the ordinate is greater than 0.
In an alternative embodiment, the acquiring module 602 is configured to acquire the target relative position parameter of the first target object in each image included in the set of images by: for any one of the set of images, performing the following operations to obtain target relative position parameters of the first target object in each of the images included in the set of images: acquiring relative position information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target relative position parameter of the first target object in the first image based on the relative position information; wherein the relative position information is a distance of the first target object in the first image from a lower edge of the first image.
In an alternative embodiment, the obtaining module 602 is configured to determine the target relative position parameter of the first target object in the first image based on the relative position information by: determining that the target relative position parameter of the first target object in the first image is a parameter for indicating that the first target object is in a lower-edge state, in the case that the relative position information indicates that the distance between the first target object in the first image and the lower edge of the first image is less than or equal to a first threshold; determining that the target relative position parameter is a parameter for indicating that the first target object is in a non-edge-area state, in the case that the relative position information indicates that the distance between the first target object and the lower edge of the first image is greater than the first threshold and less than or equal to a second threshold; and determining that the target relative position parameter is a parameter for indicating that the first target object is in an upper-edge state, in the case that the relative position information indicates that the distance between the first target object and the lower edge of the first image is greater than the second threshold.
In an alternative embodiment, the determining module 604 is configured to determine the first target image from the set of images based on the target parameters of the first target object in each image and the state information of the first target object in each image by: determining weight values respectively corresponding to the pre-configured target integrity parameter, target pose parameter, and target relative position parameter; determining a total parameter value of the first target object in each image from the target integrity parameter of the first target object in each image and its corresponding weight value, the target pose parameter and its corresponding weight value, and the target relative position parameter and its corresponding weight value; and determining the first target image based on the total parameter value of the first target object in each image and the state information of the first target object in each image.
In an alternative embodiment, the apparatus is further configured to: before determining the first target image from the set of images based on the target parameters of the first target object in each image and the state information of the first target object in each image, perform the following operations for any one of the set of images to obtain the state information of the first target object in each image included in the set of images: determining the state information of the first target object in a first image included in the set of images as information for indicating a creation state in a case where the first target object appears for the first time in the first image; determining the state information of the first target object as information for indicating an update state in a case where the first target object appears in the first image and also appears in a second image, the second image being an image included in the set of images and located in the frame preceding the first image; determining the state information of the first target object as information for indicating a lost state in a case where the first target object appears in the first image, does not appear in the second image, and appears in a third image, the third image being an image included in the set of images and located in a frame preceding the second image; and determining the state information of the first target object as information for indicating a deletion state in a case where the first target object does not appear in the first image and does not appear in any of a consecutive predetermined number of frame images included in the set of images, wherein the last frame image of the consecutive predetermined number of frame images is the frame preceding the first image.
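One possible reading of these four cases is a small per-frame state machine. The sketch below is an assumption-laden paraphrase, not the claimed implementation; in particular, the "lost" case is read as an object that reappears after being absent in the immediately preceding frame:

```python
def object_state(appears_now: bool, appeared_prev_frame: bool,
                 appeared_earlier: bool, consecutive_misses: int,
                 max_misses: int) -> str:
    """Derive the state of one tracked object for the current frame."""
    if appears_now and not appeared_prev_frame and not appeared_earlier:
        return "created"   # first ever appearance
    if appears_now and appeared_prev_frame:
        return "updated"   # present in this frame and in the previous frame
    if appears_now and not appeared_prev_frame and appeared_earlier:
        return "lost"      # absent in the previous frame, seen again now
    if not appears_now and consecutive_misses >= max_misses:
        return "deleted"   # absent for the predetermined number of consecutive frames
    return "pending"       # absent, but not yet long enough to delete
```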
In an alternative embodiment, the determining module 604 is configured to determine the first target image based on the total parameter value of the first target object in each image and the state information of the first target object in each image by: determining whether history record information of the first target object is recorded in a preset binary tree model in a case where the state information of the first target object in a currently input image indicates a creation state or an update state, wherein the history record information records the historical highest total parameter value of the first target object in the binary tree model; in a case where the history record information is recorded in the binary tree model and the total parameter value of the first target object in the currently input image is greater than the historical highest total parameter value of the first target object recorded in the binary tree model, updating first history record information included in the history record information based on the record information of the first target object in the currently input image, wherein the updated first history record information includes the total parameter value of the first target object in the currently input image, the number of appearances of the first target object in the set of images, the reference count, and the index value of the currently input image; in a case where the history record information is recorded in the binary tree model and the total parameter value of the first target object in the currently input image is less than or equal to the historical highest total parameter value, updating second history record information included in the history record information based on the record information of the first target object in the currently input image, wherein the updated second history record information includes the number of appearances of the first target object in the set of images, the reference count, and the index value of the currently input image; determining the record information of the currently input image as the history record information of the first target object in the binary tree model in a case where no history record information is recorded in the binary tree model; in a case where the state information of the first target object in the currently input image indicates a lost state, updating third history record information included in the history record information based on the record information of the first target object in the currently input image, wherein the updated third history record information includes the number of appearances of the first target object in the set of images and the index value of the currently input image; determining, based on the updated history record information, the image corresponding to the historical highest total parameter value recorded by the binary tree model, and determining that image as the first target image; and in a case where the state information of the first target object in the currently input image indicates a deletion state, determining the image corresponding to the historical highest total parameter value recorded by the binary tree model as the first target image.
In an alternative embodiment, the apparatus is further configured to: determine state duration information of the first target object; in a case where the state duration information indicates a motion state, determine the image corresponding to the historical highest total parameter value recorded by the binary tree model as the first target image and decrement the reference count of the first target image by 1; in a case where the state duration information indicates a still state, decrement the reference count of the first target image by 1; and delete the first target image in a case where the reference count of the first target image reaches 0.
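The two preceding paragraphs amount to per-object record keeping: remember the best-scoring frame and release it once its reference count drops to zero. The sketch below substitutes a plain dictionary for the binary tree model and takes a loose reading of the reference-count bookkeeping; every name in it is an assumption:

```python
from dataclasses import dataclass

@dataclass
class HistoryRecord:
    """Per-object history record (the patent stores these in a binary tree
    model; a dict keyed by object id stands in for it here)."""
    best_score: float = float("-inf")   # historical highest total parameter value
    best_frame_index: int = -1          # index value of the best frame
    appearances: int = 0                # number of appearances of the object
    ref_count: int = 1                  # reference count of the stored target image

history: dict[int, HistoryRecord] = {}

def emit_target_image(obj_id: int, frame_index: int) -> None:
    # Placeholder: a real system would export the stored frame here.
    print(f"object {obj_id}: first target image is frame {frame_index}")

def update_history(obj_id: int, state: str, score: float, frame_index: int) -> None:
    rec = history.setdefault(obj_id, HistoryRecord())
    if state in ("created", "updated"):
        rec.appearances += 1
        if score > rec.best_score:      # new historical highest total parameter value
            rec.best_score = score
            rec.best_frame_index = frame_index
    elif state == "lost":
        rec.appearances += 1            # bookkeeping continues while the object is lost
    elif state == "deleted":
        emit_target_image(obj_id, rec.best_frame_index)

def release(obj_id: int) -> None:
    """Decrement the reference count; delete the record when it reaches 0."""
    rec = history[obj_id]
    rec.ref_count -= 1
    if rec.ref_count == 0:
        del history[obj_id]
```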
According to a further aspect of the embodiments of the present invention there is also provided an electronic device for implementing the above-described method of determining an image, as shown in fig. 7, the electronic device comprising a memory 702 and a processor 704, the memory 702 having stored therein a computer program, the processor 704 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, acquiring target parameters of a first target object in each image included in a group of images, wherein the target parameters comprise target integrity parameters, target attitude parameters and target relative position parameters of the first target object;
S2, determining a first target image from a group of images based on target parameters of the first target object in each image and state information of the first target object in each image, wherein the state information is used for indicating the existence state and state duration information of the first target object.
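Taken together, steps S1 and S2 score every detection and keep, per object, the frame with the highest total parameter value. A hedged end-to-end sketch follows, reusing the illustrative helpers from the sketches above; the input format, thresholds, and numeric encodings of the categorical labels are all assumptions:

```python
def select_target_images(frames) -> None:
    """frames: iterable of per-frame detection lists, where each detection is
    a tuple (obj_id, dx, dy, distance_to_lower_edge, integrity_score)."""
    for frame_index, detections in enumerate(frames):
        for obj_id, dx, dy, dist, integrity in detections:
            attitude = classify_attitude(dx, dy)
            position = classify_relative_position(dist, 50.0, 400.0)
            # crude numeric encodings for the categorical labels (assumed)
            attitude_score = 1.0 if attitude == "front" else 0.5
            position_score = 1.0 if position == "non_edge" else 0.5
            score = total_parameter_value(integrity, attitude_score, position_score)
            # state handling is simplified to "updated" here; a full version
            # would call object_state() and handle created/lost/deleted as above
            update_history(obj_id, "updated", score, frame_index)
```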
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 7 is only schematic, and that the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, or the like. Fig. 7 does not limit the structure of the electronic device. For example, the electronic device may include more or fewer components (e.g., a network interface) than shown in fig. 7, or have a different configuration from that shown in fig. 7.
The memory 702 may be used to store software programs and modules, such as the program instructions/modules corresponding to the image determining method and apparatus in the embodiments of the present invention; the processor 704 executes the software programs and modules stored in the memory 702, thereby performing various functional applications and data processing, that is, implementing the image determining method described above. The memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 702 may further include memory remotely located relative to the processor 704; such remote memory may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 702 may be used to store, but is not limited to, information such as the target image, the state information, and the target parameters. As an example, as shown in fig. 7, the memory 702 may include, but is not limited to, the acquisition module 602 and the determination module 604 of the above-described image determining apparatus. It may also include other module units of the image determining apparatus, which are not described in detail in this example.
Optionally, the transmission device 706 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission device 706 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 706 is a Radio Frequency (RF) module that is configured to communicate wirelessly with the internet.
In addition, the electronic device further includes: a display 708 for displaying the target image; and a connection bus 710 for connecting the respective module parts in the above-described electronic device.
According to a further aspect of embodiments of the present invention, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring target parameters of a first target object in each image included in a group of images, wherein the target parameters comprise target integrity parameters, target attitude parameters and target relative position parameters of the first target object;
S2, determining a first target image from a group of images based on target parameters of the first target object in each image and state information of the first target object in each image, wherein the state information is used for indicating the existence state and state duration information of the first target object.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program instructing the relevant hardware of a terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the foregoing embodiments of the present invention, each embodiment has its own emphasis; for any portion not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and other divisions may be used in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Moreover, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (11)

1. A method of determining an image, comprising:
Acquiring target parameters of a first target object in each image included in a group of images, wherein the target parameters comprise target integrity parameters, target attitude parameters and target relative position parameters of the first target object;
Determining a first target image from the group of images based on target parameters of the first target object in the images and state information of the first target object in the images, wherein the state information is used for indicating the existence state and state duration information of the first target object;
The acquiring the target integrity parameters of the first target object in each image included in the set of images includes: for any one of the set of images, performing the following operations to obtain target integrity parameters of the first target object in each of the images included in the set of images: acquiring coordinate information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target integrity parameter of the first target object in the first image based on the coordinate information and an overlap state, wherein the overlap state is used for indicating whether the first target object and other objects are overlapped in the first image when the first image comprises the other objects; determining a target integrity parameter of the first target object in the first image as a parameter for indicating that the first target object is not present, in the case that the first target object is not included in the first image;
The acquiring target attitude parameters of the first target object in each image included in the set of images includes: for any one of the set of images, performing the following operations to obtain target pose parameters of the first target object in each image included in the set of images: acquiring motion trail coordinates of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target attitude parameter of the first target object in the first image based on the motion trail coordinates; the motion trail coordinates are differences between coordinates of the first target object in the first image and coordinates of the first target object in a second image, wherein the second image is an image which is included in the group of images and is located in a frame before the first image;
The acquiring the target relative position parameter of the first target object in each image included in the group of images includes: for any one of the set of images, performing the following operations to obtain target relative position parameters of the first target object in each of the images included in the set of images: acquiring relative position information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target relative position parameter of the first target object in the first image based on the relative position information; wherein the relative position information is a distance of the first target object in the first image from a lower edge of the first image.
2. The method of claim 1, wherein determining a target integrity parameter of the first target object in the first image based on the coordinate information and the overlap state comprises:
determining a target integrity parameter of the first target object in the first image as a parameter for indicating that the first target object is in an incomplete state, in the case that the first target object is located at an image boundary of the first image based on the coordinate information;
Determining that a target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in an incomplete state when the first target object is located inside the image of the first image based on the coordinate information and the first target object overlaps with an existing other object;
determining that a target integrity parameter of the first target object in the first image is a parameter for indicating that the first target object is in a complete state when the first target object is located inside the image based on the coordinate information and no other object exists or the first target object does not overlap with the existing other objects;
wherein the image boundary is a pre-divided area which is located at the edge of the first image and in which the first target object cannot be completely displayed, and the image interior is the area of the first image other than the image boundary.
3. The method of claim 1, wherein determining a target attitude parameter of the first target object in the first image based on the motion trail coordinates comprises:
Determining a target attitude parameter of the first target object in the first image as a parameter for indicating that the first target object is in a side state under the condition that the ordinate of the motion trail coordinate is 0;
determining a target attitude parameter of the first target object in the first image as a parameter for indicating that the first target object is in a front state under the condition that the abscissa of the motion trail coordinate is 0 and the ordinate is less than or equal to 0;
and under the condition that the abscissa of the motion trail coordinate is 0 and the ordinate is greater than 0, determining the target attitude parameter of the first target object in the first image as a parameter for indicating that the first target object is in a back state.
4. The method of claim 1, wherein determining a target relative position parameter of the first target object in the first image based on the relative position information comprises:
Determining a target relative position parameter of the first target object in the first image as a parameter for indicating that the first target object is in a lower edge state under the condition that the relative position information is determined to be used for indicating that the distance between the first target object in the first image and the lower edge of the first image is smaller than or equal to a first threshold value;
Determining a target relative position parameter of the first target object in the first image as a parameter for indicating that the first target object is in a non-edge area state under the condition that the relative position information is determined to be used for indicating that the distance between the first target object in the first image and the lower edge of the first image is larger than a first threshold value and smaller than or equal to a second threshold value;
And determining that the target relative position parameter of the first target object in the first image is a parameter for indicating that the first target object is in an upper edge state under the condition that the relative position information is determined to be used for indicating that the distance between the first target object in the first image and the lower edge of the first image is larger than a second threshold value.
5. The method of claim 1, wherein determining a first target image from the set of images based on target parameters of the first target object in the images and state information of the first target object in the images comprises:
determining weight values respectively corresponding to the pre-configured target integrity parameter, the target attitude parameter and the target relative position parameter;
determining a total parameter value of the first target object in each image according to the target integrity parameter, the target attitude parameter, and the target relative position parameter of the first target object in each image, each multiplied by its corresponding weight value;
The first target image is determined based on the total parameter value of the first target object in each image and the state information of the first target object in each image.
6. The method of claim 5, wherein prior to determining a first target image from the set of images based on the target parameters of the first target object in the images and the state information of the first target object in the images, the method further comprises:
For any one of the set of images, performing the following operations to obtain status information of the first target object in each image included in the set of images:
determining state information of the first target object in a first image included in the set of images as information for indicating a creation state when the first target object appears for the first time in the first image;
Determining state information of the first target object as information indicating an update state in a case where the first target object appears in a first image included in the set of images and also appears in a second image included in the set of images, the second image being an image included in the set of images and located in a frame preceding the first image;
Determining state information of the first target object as information for indicating a lost state in a case where the first target object appears in a first image included in the set of images, does not appear in a second image included in the set of images, and appears in a third image included in the set of images, wherein the second image is the image located in the frame preceding the first image and the third image is an image located in a frame preceding the second image;
In a case where the first target object does not appear in a first image included in the set of images and does not appear in any of a consecutive predetermined number of frame images included in the set of images, determining state information of the first target object as information for indicating a deletion state, wherein the last frame image of the consecutive predetermined number of frame images is the frame preceding the first image.
7. The method of claim 6, wherein determining the first target image based on the total parameter value of the first target object in each image and the status information of the first target object in each image comprises:
determining whether history record information of the first target object is recorded in a preset binary tree model under the condition that the state information of the first target object in a currently input image indicates a creation state or an update state, wherein the history record information records the historical highest total parameter value of the first target object in the binary tree model;

when the history record information is recorded in the binary tree model and the total parameter value of the first target object in the currently input image is greater than the historical highest total parameter value of the first target object recorded in the binary tree model, updating first history record information included in the history record information based on the record information of the first target object in the currently input image, wherein the updated first history record information includes the total parameter value of the first target object in the currently input image, the number of appearances of the first target object in the set of images, the reference count, and the index value of the currently input image;

when the history record information is recorded in the binary tree model and the total parameter value of the first target object in the currently input image is less than or equal to the historical highest total parameter value, updating second history record information included in the history record information based on the record information of the first target object in the currently input image, wherein the updated second history record information includes the number of appearances of the first target object in the set of images, the reference count, and the index value of the currently input image;

determining the record information of the currently input image as the history record information of the first target object in the binary tree model under the condition that no history record information is recorded in the binary tree model;

under the condition that the state information of the first target object in the currently input image indicates a lost state, updating third history record information included in the history record information based on the record information of the first target object in the currently input image, wherein the updated third history record information includes the number of appearances of the first target object in the set of images and the index value of the currently input image;

determining, based on the updated history record information, the image corresponding to the historical highest total parameter value recorded by the binary tree model, and determining that image as the first target image;

and under the condition that the state information of the first target object in the currently input image indicates a deletion state, determining the image corresponding to the historical highest total parameter value recorded by the binary tree model as the first target image.
8. The method of claim 7, wherein the method further comprises:
Determining state duration information of the first target object; in a case where the state duration information indicates a motion state, determining the image corresponding to the historical highest total parameter value recorded by the binary tree model as the first target image, and decrementing the reference count of the first target image by 1;

decrementing the reference count of the first target image by 1 in a case where the state duration information indicates a still state;

and deleting the first target image in a case where the reference count of the first target image reaches 0.
9. An image determining apparatus, comprising:
an acquisition module, configured to acquire target parameters of a first target object in each image included in a set of images, wherein the target parameters include a target integrity parameter, a target attitude parameter, and a target relative position parameter of the first target object;
A determining module, configured to determine a first target image from the set of images based on a target parameter of the first target object in each image and state information of the first target object in each image, where the state information is used to indicate a presence state and state duration information of the first target object;
The apparatus is configured to obtain target integrity parameters of a first target object in each image included in a set of images by: for any one of the set of images, performing the following operations to obtain target integrity parameters of the first target object in each of the images included in the set of images: acquiring coordinate information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target integrity parameter of the first target object in the first image based on the coordinate information and an overlap state, wherein the overlap state is used for indicating whether the first target object and other objects are overlapped in the first image when the first image comprises the other objects; determining a target integrity parameter of the first target object in the first image as a parameter for indicating that the first target object is not present, in the case that the first target object is not included in the first image;
the apparatus is configured to acquire target attitude parameters of the first target object in each image included in the set of images by: for any one of the set of images, performing the following operations to obtain the target attitude parameters of the first target object in each image included in the set of images: acquiring motion trail coordinates of the first target object in a first image included in the set of images under the condition that the first target object is included in the first image; determining a target attitude parameter of the first target object in the first image based on the motion trail coordinates; wherein the motion trail coordinates are the difference between the coordinates of the first target object in the first image and the coordinates of the first target object in a second image, the second image being an image which is included in the set of images and located in the frame preceding the first image;
The apparatus is configured to obtain target relative position parameters of a first target object in each image included in a set of images by: for any one of the set of images, performing the following operations to obtain target relative position parameters of the first target object in each of the images included in the set of images: acquiring relative position information of a first target object in a first image included in the group of images under the condition that the first target object is included in the first image; determining a target relative position parameter of the first target object in the first image based on the relative position information; wherein the relative position information is a distance of the first target object in the first image from a lower edge of the first image.
10. A computer readable storage medium comprising a stored program, wherein the program when run performs the method of any of the preceding claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, and the processor is arranged to execute the method according to any one of claims 1 to 8 by means of the computer program.
CN202010575238.2A 2020-06-22 2020-06-22 Image determining method and device, storage medium and electronic device Active CN111738152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010575238.2A CN111738152B (en) 2020-06-22 2020-06-22 Image determining method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010575238.2A CN111738152B (en) 2020-06-22 2020-06-22 Image determining method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN111738152A CN111738152A (en) 2020-10-02
CN111738152B true CN111738152B (en) 2024-04-19

Family

ID=72652010

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010575238.2A Active CN111738152B (en) 2020-06-22 2020-06-22 Image determining method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111738152B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013079098A1 (en) * 2011-11-29 2013-06-06 Layar B.V. Dynamically configuring an image processing function
CN110879995A (en) * 2019-12-02 2020-03-13 上海秒针网络科技有限公司 Target object detection method and device, storage medium and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a target tracking system based on multi-channel image fusion; Liang Xingjian; Lei Wen; Chen Chao; Journal of Sichuan University of Science & Engineering (Natural Science Edition) (06); full text *

Also Published As

Publication number Publication date
CN111738152A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN110443210B (en) Pedestrian tracking method and device and terminal
CN108656107B (en) Mechanical arm grabbing system and method based on image processing
CN109410316B (en) Method for three-dimensional reconstruction of object, tracking method, related device and storage medium
KR102106135B1 (en) Apparatus and method for providing application service by using action recognition
CN110610169B (en) Picture marking method and device, storage medium and electronic device
CN108875667B (en) Target identification method and device, terminal equipment and storage medium
JP2019036167A (en) Image processing apparatus and image processing method
CN110162454B (en) Game running method and device, storage medium and electronic device
CN111124888B (en) Method and device for generating recording script and electronic device
CN110149553A (en) Treating method and apparatus, storage medium and the electronic device of image
CN111428660A (en) Video editing method and device, storage medium and electronic device
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN113160231A (en) Sample generation method, sample generation device and electronic equipment
CN111821693A (en) Perspective plug-in detection method, device, equipment and storage medium for game
CN111368860B (en) Repositioning method and terminal equipment
CN111738152B (en) Image determining method and device, storage medium and electronic device
CN113221819A (en) Detection method and device for package violent sorting, computer equipment and storage medium
CN110414322B (en) Method, device, equipment and storage medium for extracting picture
CN109919164B (en) User interface object identification method and device
US11551379B2 (en) Learning template representation libraries
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN114187180A (en) Picture splicing method and device
CN108898134B (en) Number identification method and device, terminal equipment and storage medium
CN112672033A (en) Image processing method and device, storage medium and electronic device
CN113724176A (en) Multi-camera motion capture seamless connection method, device, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant