CN112967228B - Determination method and device of target optical flow information, electronic equipment and storage medium - Google Patents
- Publication number
- CN112967228B (application CN202110146441.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
Abstract
The invention relates to a method, an apparatus, an electronic device and a storage medium for determining target optical flow information. The method comprises: respectively determining an optical flow information set between a target image and two adjacent frames of images, and a camera pose information set corresponding to the target image and the two adjacent frames of images; determining a position information set corresponding to an object in the target image according to the optical flow information set and the camera pose information set; determining a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set; determining a second mapping information set of the object in the target image according to the position information set; and determining the target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set. The embodiment of the invention can improve the accuracy of determining the position information of a moving object and the accuracy of determining the moving state of the object.
Description
Technical Field
The present invention relates to the field of computer vision, and in particular to a method and apparatus for determining target optical flow information, an electronic device, and a storage medium.
Background
In the field of computer vision, an optical flow field describes the position change of each pixel point between two frames of images, that is, the spatial motion information of an object. When compensation optical flow information is determined in existing schemes, a moving object must be estimated while the camera itself is moving; the scene motion described by the optical flow field therefore couples the camera motion with the object motion, and the optical flow field needs to be compensated.
The current scheme for compensating the optical flow field is the rotation compensation method proposed by Bideau. In this method, assuming the rotation parameters of the camera are known, the optical flow caused by the rotation is calculated from those parameters, and the compensation optical flow information is then determined from the real optical flow information and the rotation-induced optical flow information. The compensation optical flow information determined in this way is not accurate enough, so the position information of a moving object cannot be accurately estimated from it, nor can the moving state of the object be accurately determined.
Disclosure of Invention
The embodiment of the invention provides a method, an apparatus, an electronic device and a storage medium for determining target optical flow information, which can obtain accurate target optical flow information, thereby improving the accuracy of determining the position information of a moving object and the accuracy of determining the moving state of the object.
The embodiment of the invention provides a method for determining target optical flow information, which comprises the following steps:
respectively determining an optical flow information set between a target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images;
Determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set;
determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set;
determining a second mapping information set of the object in the target image according to the position information set;
And determining target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
Further, respectively determining an optical flow information set between the target image and the two adjacent frames of images, and a camera pose information set corresponding to the target image and the two adjacent frames of images, includes:
determining a target image, a first image and a second image from the multi-frame image; the first image and the second image are adjacent to the target image respectively;
determining first pixel information of an object from a target image, determining second pixel information of the object from the first image, and determining third pixel information of the object from the second image;
determining first optical flow information according to the first pixel information and the second pixel information;
Determining second optical flow information according to the first pixel information and the third pixel information;
taking the first optical flow information and the second optical flow information as an optical flow information set;
Determining first camera pose information corresponding to a target image and a first image;
determining second camera pose information corresponding to the target image and the second image;
the first camera pose information and the second camera pose information are used as a camera pose information set.
Further, determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set, including:
triangularizing the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object;
triangularizing the first pixel information, the third pixel information and the second camera pose information to determine second position information corresponding to the object;
The first location information and the second location information are taken as a location information set.
Further, determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set, including:
Determining mapping pixel information of the object in the first image according to the second position information and the first camera pose information;
determining mapping pixel information of the object in the second image according to the first position information and the second camera pose information;
The mapping pixel information of the object in the first image and the mapping pixel information of the object in the second image are used as a first mapping information set.
Further, determining a second set of mapping information for the object in the target image based on the set of location information, comprising:
Determining first mapping pixel information of the object in the target image according to the first position information;
Determining second mapping pixel information of the object in the target image according to the second position information;
the first mapped pixel information and the second mapped pixel information are used as a second mapped information set.
Further, determining the target optical flow information from the optical flow information set, the first mapping information set, and the second mapping information set, includes:
determining first compensation information according to the optical flow information set and the first mapping information set;
determining second compensation information according to the optical flow information set and the second mapping information set;
and taking the first compensation information and the second compensation information as the target optical flow information.
Correspondingly, the embodiment of the invention also provides a device for determining target optical flow information, which comprises:
the optical flow information set and camera pose information set determining module is used for respectively determining an optical flow information set between the target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images;
The position information set determining module is used for determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set;
The first mapping information set determining module is used for determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set;
The second mapping information set determining module is used for determining a second mapping information set of the object in the target image according to the position information set;
And the target optical flow information determining module is used for determining target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
Further, the location information set determining module includes:
the first position information determining module is used for performing triangularization processing on the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object;
and the second position information determining module is used for performing triangulation processing on the first pixel information, the third pixel information and the second camera pose information and determining second position information corresponding to the object.
Correspondingly, the embodiment of the invention also provides an electronic device, comprising a processor and a memory, wherein at least one instruction, at least one program, a code set or an instruction set is stored in the memory, and the at least one instruction, at least one program, code set or instruction set is loaded and executed by the processor to implement the method for determining target optical flow information described above.
Accordingly, an embodiment of the present invention further provides a computer-readable storage medium storing at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the method for determining target optical flow information described above.
The embodiment of the invention has the following beneficial effects:
According to the embodiment of the invention, the target optical flow information is determined according to the optical flow information corresponding to the multi-frame images and the camera pose information, so that the target optical flow information can be more accurate, the optical flow information corresponding to the multi-frame images can be compensated by utilizing the target optical flow information, the accuracy of determining the position information of the moving object can be improved, and the accuracy of determining the moving state of the object can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the invention or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a method for determining optical flow information of a target according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for determining optical flow information of a target according to an embodiment of the present invention;
FIG. 3 is a flow chart of determining an optical flow information set and a camera pose information set provided by an embodiment of the present invention;
FIG. 4 is a flowchart of determining a set of location information corresponding to an object in a target image according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a device for determining target optical flow information according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail with reference to the accompanying drawings. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort shall fall within the scope of the invention.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic may be included in at least one implementation of the invention. In the description of the embodiments of the present invention, it should be understood that the terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined by "first", "second" or "third" may explicitly or implicitly include one or more of that feature. Moreover, the terms "first", "second", "third" and the like are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises", "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article or device that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed or inherent to such process, method, apparatus, article or device.
The present specification provides method operation steps as described in the embodiments or flowcharts, but more or fewer operation steps may be included on the basis of conventional or non-inventive labor. The sequence of steps recited in the embodiments is only one of many possible execution sequences and does not represent the unique execution sequence; when actually executed, the steps may be performed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment) according to the methods shown in the embodiments or the drawings.
Next, a specific embodiment of the method for determining target optical flow information according to the present invention is described. Fig. 1 is a flowchart of a method for determining target optical flow information according to an embodiment of the present invention, and fig. 2 is a schematic diagram of the method. As shown in fig. 1 and 2, the method includes:
S101: and respectively determining an optical flow information set between the target image and the two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images.
In the embodiment of the present specification, the two adjacent frames of images refer to the two frames of images adjacent to the target image.
In an alternative embodiment, a plurality of objects and the position data corresponding to each object may be determined from the target image, and the target corresponding to each object and its position data may be determined from each of the two frames of images adjacent to the target image; the mean of the differences between the position data of the targets and the position data of the objects is then taken as the optical flow information between the target image and the adjacent image. For example, when the images captured by the camera contain a number of trees and a number of travelling vehicles, the travelling vehicles may be determined as objects in the target image, and the same vehicles may then be determined as the corresponding targets in the two frames of images adjacent to the target image.
For example, the position data a_1, a_2 and a_3 of a plurality of objects are determined from the target image A, the corresponding position data b_1, b_2 and b_3 are determined from the image B adjacent to the target image, and the corresponding position data c_1, c_2 and c_3 are determined from the image C adjacent to the target image. The optical flow information f_1 and f_2 between the target image and the two adjacent frames of images can then be determined by the following formulas, respectively:
f_1 = [(b_1 - a_1) + (b_2 - a_2) + (b_3 - a_3)] / 3
f_2 = [(c_1 - a_1) + (c_2 - a_2) + (c_3 - a_3)] / 3
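The mean-displacement estimate above can be sketched as follows. This is a minimal illustration: the matching of objects between frames is assumed to be given, and all coordinates are hypothetical 2D pixel positions.

```python
def mean_displacement(targets, objects):
    """Average per-object displacement between a target image and an
    adjacent frame, as in f_1 = [(b1-a1)+(b2-a2)+(b3-a3)] / 3."""
    n = len(objects)
    dx = sum(t[0] - o[0] for t, o in zip(targets, objects)) / n
    dy = sum(t[1] - o[1] for t, o in zip(targets, objects)) / n
    return (dx, dy)

# Hypothetical positions a1..a3 in target image A and b1..b3 in adjacent image B.
a = [(10.0, 20.0), (30.0, 40.0), (50.0, 60.0)]
b = [(12.0, 21.0), (33.0, 41.0), (51.0, 64.0)]
f1 = mean_displacement(b, a)  # mean of (2,1), (3,1), (1,4) -> (2.0, 2.0)
```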
In another alternative embodiment, an optical flow information set between the target image and the two adjacent frames of images, and a camera pose information set corresponding to the target image and the two adjacent frames of images, may be determined respectively; here the two adjacent frames of images are the frames immediately preceding and following the target image.
FIG. 3 is a flowchart of determining an optical flow information set and a camera pose information set according to an embodiment of the present invention, specifically as shown in FIGS. 2 and 3, the method includes:
s301: determining a target image, a first image and a second image from the multi-frame image; the first image and the second image are adjacent to the target image, respectively.
In the embodiment of the invention, three consecutive frames of images can be determined from an image sequence containing multiple frames of images, and in frame order these are determined as the first image, the target image and the second image; alternatively, one frame may be selected from the multi-frame images as the target image, with the previous frame determined as the first image and the next frame as the second image. The first image and the second image are the two adjacent frames of images described above.
In the embodiment of the invention, the first image may be the (t-1)-th frame image, the target image may be the t-th frame image, and the second image may be the (t+1)-th frame image; the object may be an object point.
S303: first pixel information of the object is determined from the target image, second pixel information of the object is determined from the first image, and third pixel information of the object is determined from the second image.
In the embodiment of the invention, the first pixel information of the object can be determined from the target image, the second pixel information of the object from the first image, and the third pixel information of the object from the second image. Specifically, as shown in fig. 2, the first pixel information P_t of the object point X_t may be determined in the target image captured at position C_t, that is, the t-th frame image; the second pixel information P_{t-1} of the object point X_{t-1} may be determined in the first image captured at position C_{t-1}, that is, the (t-1)-th frame image; and the third pixel information P_{t+1} of the object point X_{t+1} may be determined in the second image captured at position C_{t+1}, that is, the (t+1)-th frame image. Note that the object points X_t, X_{t-1} and X_{t+1} are the positions of the same object point at different times.
S305: first optical flow information is determined from the first pixel information and the second pixel information.
In the embodiment of the present invention, the first optical flow information f_1 may be determined according to the first pixel information P_t of the object in the target image and the second pixel information P_{t-1} of the object in the first image. That is, the first optical flow information describes the mapping relationship between the pixel corresponding to the object point in the (t-1)-th frame image and the corresponding pixel in the t-th frame image. Specifically, the first optical flow information f_1 may be determined using the following formula:
f_1 = P_t - P_{t-1}
s307: second optical flow information is determined from the root first pixel information and the third pixel information.
In the embodiment of the present invention, the second optical flow information f_2 may be determined according to the first pixel information P_t of the object in the target image and the third pixel information P_{t+1} of the object in the second image. That is, the second optical flow information describes the mapping relationship between the pixel corresponding to the object point in the t-th frame image and the corresponding pixel in the (t+1)-th frame image. Specifically, the second optical flow information f_2 may be determined using the following formula:
f_2 = P_{t+1} - P_t
s309: the first optical flow information and the second optical flow information are taken as an optical flow information set.
In the embodiment of the present invention, the first optical flow information and the second optical flow information described above may be used as the optical flow information set.
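For a single object point, the flow pair f_1 = P_t - P_{t-1} and f_2 = P_{t+1} - P_t above reduces to pixel-coordinate differences. A minimal sketch with hypothetical pixel coordinates:

```python
def flow(p_from, p_to):
    """Per-point optical flow as the pixel displacement p_to - p_from."""
    return (p_to[0] - p_from[0], p_to[1] - p_from[1])

p_prev = (100.0, 80.0)   # P_{t-1}: pixel in the first image (frame t-1)
p_curr = (104.0, 82.0)   # P_t:     pixel in the target image (frame t)
p_next = (109.0, 85.0)   # P_{t+1}: pixel in the second image (frame t+1)

f1 = flow(p_prev, p_curr)          # f_1 = P_t - P_{t-1} -> (4.0, 2.0)
f2 = flow(p_curr, p_next)          # f_2 = P_{t+1} - P_t -> (5.0, 3.0)
flow_set = {"f1": f1, "f2": f2}    # the optical flow information set
```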
S311: first camera pose information corresponding to the target image and the first image is determined.
In the embodiment of the invention, the first camera pose information T_1 may be determined according to the position C_t of the camera when the target image is captured and the position C_{t-1} of the camera when the first image is captured; that is, the first camera pose information T_1 may be a vector with the position of the camera at time t as its start point and the position of the camera at time t-1 as its end point.
S313: second camera pose information corresponding to the target image and the second image is determined.
In the embodiment of the present invention, the second camera pose information T_2 may be determined according to the position C_t of the camera when the target image is captured and the position C_{t+1} of the camera when the second image is captured; that is, the second camera pose information T_2 may be a vector with the position of the camera at time t as its start point and the position of the camera at time t+1 as its end point.
S315: the first camera pose information and the second camera pose information are used as a camera pose information set.
In the embodiment of the invention, the first camera pose information and the second camera pose information described above can be used as a camera pose information set.
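The vectors T_1 and T_2 described above can be sketched from hypothetical camera centres as follows. Note that a full camera pose would also include rotation; this translation-only illustration omits it.

```python
def pose_vector(c_start, c_end):
    """Vector from camera position c_start to camera position c_end."""
    return tuple(e - s for s, e in zip(c_start, c_end))

# Hypothetical camera centres at times t-1, t, t+1.
c_prev = (0.0, 0.0, 0.0)   # C_{t-1}
c_curr = (1.0, 0.0, 0.5)   # C_t
c_next = (2.0, 0.0, 1.0)   # C_{t+1}

T1 = pose_vector(c_curr, c_prev)   # start C_t, end C_{t-1} -> (-1.0, 0.0, -0.5)
T2 = pose_vector(c_curr, c_next)   # start C_t, end C_{t+1} -> (1.0, 0.0, 0.5)
pose_set = (T1, T2)                # the camera pose information set
```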
S103: and determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set.
In the embodiment of the invention, the position information set corresponding to the object in the target image can be determined according to the first optical flow information, the second optical flow information, the first camera pose information and the second camera pose information. Fig. 4 is a flowchart of determining the position information set corresponding to the object in the target image according to an embodiment of the present invention, as shown in fig. 2 and fig. 4.
S401: and performing triangularization processing on the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object.
In the embodiment of the invention, the first position information P'_{t+1} corresponding to the object can be determined according to the first pixel information P_t of the object in the target image, the second pixel information P_{t-1} of the object in the first image, and the first camera pose information T_1 corresponding to the target image and the first image; that is, the first position information P'_{t+1} corresponding to the object point is obtained by triangulating the first pixel information P_t, the second pixel information P_{t-1} and the first camera pose information T_1.
S403: and performing triangularization processing on the first pixel information, the third pixel information and the second camera pose information, and determining second position information corresponding to the object.
In the embodiment of the invention, the second position information P'_{t-1} corresponding to the object can be determined according to the first pixel information P_t of the object in the target image, the third pixel information P_{t+1} of the object in the second image, and the second camera pose information T_2 corresponding to the target image and the second image; that is, the second position information P'_{t-1} corresponding to the object point is obtained by triangulating the first pixel information P_t, the third pixel information P_{t+1} and the second camera pose information T_2.
S405: the first location information and the second location information are taken as a location information set.
In the embodiment of the present invention, the first location information and the second location information described above may be used as the location information set.
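The patent does not fix a particular triangulation algorithm; the midpoint method below is one common choice, shown under the assumption that camera centres and back-projected ray directions for the matched pixels are available. All values are hypothetical.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the closest points on rays o1 + s*d1 and o2 + t*d2."""
    w0 = tuple(a - b for a, b in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(o + s * k for o, k in zip(o1, d1))
    p2 = tuple(o + t * k for o, k in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Two cameras observing the same object point at (0, 0, 4):
X = triangulate_midpoint((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                         (2.0, 0.0, 0.0), (-2.0, 0.0, 4.0))
```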
S105: and determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set.
In the embodiment of the present invention, the mapped pixel information P_1 of the object in the first image may be determined according to the second position information P'_{t-1} and the first camera pose information T_1, and the mapped pixel information P_2 of the object in the second image may be determined according to the first position information P'_{t+1} and the second camera pose information T_2; the mapped pixel information P_1 of the object in the first image and the mapped pixel information P_2 of the object in the second image are then used as the first mapping information set.
Specifically, using the position change data of the camera from time t to time t-1, the first pixel information P_t, the third pixel information P_{t+1} and the second camera pose information T_2 may be triangulated; the resulting spatial position data corresponding to the object point is converted into the camera coordinate system corresponding to the (t-1)-th frame image and projected, determining the mapped pixel information P_1 of the second position information P'_{t-1} corresponding to the object point in the (t-1)-th frame image. Likewise, using the position change data of the camera from time t to time t+1, the first pixel information P_t, the second pixel information P_{t-1} and the first camera pose information T_1 may be triangulated, so that the spatial position data corresponding to the object point is converted into the camera coordinate system corresponding to the (t+1)-th frame image and projected, determining the mapped pixel information P_2 of the first position information P'_{t+1} corresponding to the object point in the (t+1)-th frame image.
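The conversion-and-projection step can be illustrated with a pinhole model. Identity rotation and a hypothetical focal length are assumed, so this is a sketch of the projection only, not the patent's full pose handling.

```python
def project(point, cam_center, focal=500.0):
    """Project a world point into a camera at cam_center (pinhole model,
    identity rotation assumed for brevity)."""
    x = point[0] - cam_center[0]
    y = point[1] - cam_center[1]
    z = point[2] - cam_center[2]
    return (focal * x / z, focal * y / z)

X = (0.0, 0.0, 4.0)            # a triangulated object point, e.g. P'_{t-1}
c_prev = (1.0, 0.0, 0.0)       # hypothetical camera centre for frame t-1
P1 = project(X, c_prev)        # mapped pixel information P_1 -> (-125.0, 0.0)
```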
S107: a second set of mapping information for the object in the target image is determined based on the set of location information.
In the embodiment of the present invention, the mapped pixel information P_3 of the object in the target image may be determined according to the first position information P'_{t+1}, and the mapped pixel information P_4 of the object in the target image may be determined according to the second position information P'_{t-1}; the mapped pixel information P_3 and the mapped pixel information P_4 of the object in the target image are then used as the second mapping information set.
Specifically, the first pixel information P_t, the second pixel information P_{t-1} and the first camera pose information T_1 may be triangulated, and the resulting spatial position data corresponding to the object point projected into the camera coordinate system corresponding to the t-th frame image, determining the mapped pixel information P_3 of the first position information P'_{t+1} corresponding to the object point in the t-th frame image. Likewise, the first pixel information P_t, the third pixel information P_{t+1} and the second camera pose information T_2 may be triangulated, and the resulting spatial position data projected into the camera coordinate system corresponding to the t-th frame image, determining the mapped pixel information P_4 of the second position information P'_{t-1} corresponding to the object point in the t-th frame image.
S109: and determining target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
In the embodiment of the invention, the first compensation information may be determined according to the optical flow information set and the first mapping information set, and the second compensation information may be determined according to the optical flow information set and the second mapping information set; the first compensation information and the second compensation information are then taken as the target optical flow information.
In a specific embodiment, the backward compensation optical flow field r1 may be determined from the mapped pixel information P1 of the object in the first image and the second pixel information Pt-1 of the object in the first image, and the forward compensation optical flow field r2 from the mapped pixel information P2 of the object in the second image and the third pixel information Pt+1 of the object in the second image. The backward compensation optical flow field r1 and the forward compensation optical flow field r2 are then taken as the first compensation information, and may be determined by the following formulas:
r1 = P1 − Pt-1
r2 = P2 − Pt+1
In a specific embodiment, the first projection residual r3 may be determined from the first pixel information Pt of the object in the target image and the mapped pixel information P3 of the object in the target image, and the second projection residual r4 from the first pixel information Pt and the mapped pixel information P4 of the object in the target image. The first projection residual r3 and the second projection residual r4 are then taken as the second compensation information, and may be determined by the following formulas:
r3 = P3 − Pt
r4 = P4 − Pt
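Given the observed and mapped pixels, the four compensation terms are plain per-pixel differences. A minimal NumPy sketch, where the array names follow the notation above and all numeric values are illustrative placeholders:

```python
import numpy as np

# Observed pixels of the object in frames t-1, t, t+1 (illustrative values).
P_tm1 = np.array([250.0, 240.0])   # second pixel information, frame t-1
P_t   = np.array([320.0, 240.0])   # first pixel information, frame t
P_tp1 = np.array([390.0, 240.0])   # third pixel information, frame t+1

# Mapped pixels obtained by triangulation + re-projection (illustrative).
P1 = np.array([252.0, 241.0])      # mapped into frame t-1
P2 = np.array([388.0, 239.0])      # mapped into frame t+1
P3 = np.array([321.0, 240.5])      # mapped into frame t
P4 = np.array([319.0, 239.5])      # mapped into frame t

r1 = P1 - P_tm1   # backward compensation optical flow field
r2 = P2 - P_tp1   # forward compensation optical flow field
r3 = P3 - P_t     # first projection residual
r4 = P4 - P_t     # second projection residual
```

For a dense formulation the same subtractions apply element-wise to whole H×W×2 flow fields instead of single 2-vectors.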
In the embodiment of the present invention, the method for determining target optical flow information described above may be implemented with a target optical flow information determination model. Specifically, the optical flow information set between the target image and the two adjacent frames of images, together with the camera pose information set corresponding to those images, may be taken as the input of the model, which outputs the compensation optical flow information, namely the backward compensation optical flow field r1, the forward compensation optical flow field r2, the first projection residual r3 and the second projection residual r4. The method steps described above may be implemented inside the model and are not repeated here.
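Composing the steps above, one plausible sketch of such a model's forward computation for a single tracked point is given below. The function name, the DLT triangulation, and the 3x4 projection-matrix inputs are assumptions chosen for illustration; a real embodiment might instead operate on dense flow fields with learned components.

```python
import numpy as np

def compensation_flows(pt, pt_m1, pt_p1, proj_t, proj_tm1, proj_tp1):
    """Return (r1, r2, r3, r4) for one object point.

    pt, pt_m1, pt_p1 : observed pixels in frames t, t-1, t+1 (2-vectors)
    proj_*           : assumed 3x4 projection matrices of the three frames
    """
    def triangulate(Pa, Pb, a, b):
        # Linear (DLT) two-view triangulation of one point.
        A = np.stack([a[0] * Pa[2] - Pa[0], a[1] * Pa[2] - Pa[1],
                      b[0] * Pb[2] - Pb[0], b[1] * Pb[2] - Pb[1]])
        X = np.linalg.svd(A)[2][-1]
        return X[:3] / X[3]

    def project(P, X):
        uv = P @ np.append(X, 1.0)
        return uv[:2] / uv[2]

    X_fwd = triangulate(proj_t, proj_tp1, pt, pt_p1)  # second position info
    X_bwd = triangulate(proj_t, proj_tm1, pt, pt_m1)  # first position info

    r1 = project(proj_tm1, X_fwd) - pt_m1  # backward compensation flow
    r2 = project(proj_tp1, X_bwd) - pt_p1  # forward compensation flow
    r3 = project(proj_t, X_bwd) - pt       # first projection residual
    r4 = project(proj_t, X_fwd) - pt       # second projection residual
    return r1, r2, r3, r4
```

For a static point whose pixels are consistent with the camera motion, all four terms vanish; nonzero values flag independent object motion, which is what the compensation is meant to expose.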
With the method for determining target optical flow information provided by the embodiment of the invention, the target optical flow information is determined from the optical flow information and the camera pose information corresponding to the multi-frame images. The optical flow information corresponding to the multi-frame images can then be compensated with the target optical flow information, improving the accuracy of determining both the position information and the moving state of a moving object.
The embodiment of the invention further provides a device for determining target optical flow information. Fig. 5 is a schematic structural diagram of the device; as shown in fig. 5, the device includes:
the optical flow information set and camera pose information set determining module 501 is configured to determine an optical flow information set between a target image and two adjacent frames of images and a camera pose information set corresponding to the target image and two adjacent frames of images, respectively;
A position information set determining module 503, configured to determine a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set;
a first mapping information set determining module 505, configured to determine a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set;
A second mapping information set determining module 507, configured to determine a second mapping information set of the object in the target image according to the location information set;
The target optical flow information determining module 509 is configured to determine target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
The device for determining target optical flow information provided by the embodiment of the invention determines the target optical flow information from the optical flow information and the camera pose information corresponding to the multi-frame images, making the target optical flow information more accurate. The optical flow information corresponding to the multi-frame images can then be compensated with the target optical flow information, improving the accuracy of determining both the position information and the moving state of a moving object.
In the embodiment of the present invention, the location information set determining module 503 includes:
the first position information determining module is used for performing triangulation processing on the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object;
and the second position information determining module is used for performing triangulation processing on the first pixel information, the third pixel information and the second camera pose information to determine second position information corresponding to the object.
The apparatus and method embodiments in the embodiments of the present invention are based on the same inventive concept.
The electronic device according to the embodiment of the present invention may be disposed in a server to store at least one instruction, at least one program, a code set, or an instruction set related to implementing the determination method of the target optical flow information in the method embodiment, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the determination method of the target optical flow information.
The storage medium may be disposed in a server to store at least one instruction, at least one program, a code set, or an instruction set related to a method for implementing the determination method of the target optical flow information in the method embodiment, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the determination method of the target optical flow information.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a removable hard disk, a magnetic disk, an optical disk, or other media that can store program code.
The embodiments of the method, the device, the electronic device or the storage medium for determining the target optical flow information provided by the embodiment of the invention can determine the target optical flow information according to the optical flow information corresponding to the multi-frame images and the camera pose information, so that the target optical flow information can be more accurate, the optical flow information corresponding to the multi-frame images can be compensated by utilizing the target optical flow information, the accuracy of determining the position information of the moving object can be improved, and the accuracy of determining the moving state of the object can also be improved.
It should be noted that the order in which the embodiments of the invention are presented is illustrative only and is not intended to limit the invention to the particular embodiments disclosed; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results; in some embodiments, multitasking and parallel processing may also be possible or advantageous.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the description of the device embodiments is relatively simple because they are similar to the method embodiments; for relevant details, see the description of the method embodiments.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, and such changes and modifications are also intended to fall within the scope of the invention.
Claims (10)
1. A method for determining optical flow information of a target, comprising:
Respectively determining an optical flow information set between a target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images; the two adjacent frame images are two adjacent frame images of the target image;
Wherein the determining the optical flow information set between the target image and the two adjacent frames of images includes: determining a plurality of objects and position data corresponding to each object from the target image, respectively determining, from the two frames of images adjacent to the target image, a target corresponding to each object and position data corresponding to each target, and determining the optical flow information set between the target image and the adjacent two frames of images according to the average value of the position data corresponding to the objects and the position data corresponding to the targets;
Determining a position information set corresponding to an object in the target image according to the optical flow information set and the camera pose information set;
Determining a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set;
determining a second mapping information set of the object in the target image according to the position information set;
And determining target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
2. The method according to claim 1, wherein determining the optical flow information set between the target image and the two adjacent frames of images and the camera pose information set corresponding to the target image and the two adjacent frames of images respectively includes:
Determining a target image, a first image and a second image from the multi-frame image; the first image and the second image are adjacent to the target image respectively;
Determining first pixel information of the object from the target image, determining second pixel information of the object from the first image, and determining third pixel information of the object from the second image;
determining first optical flow information according to the first pixel information and the second pixel information;
determining second optical flow information according to the first pixel information and the third pixel information;
taking the first optical flow information and the second optical flow information as the optical flow information set;
determining first camera pose information corresponding to the target image and the first image;
Determining second camera pose information corresponding to the target image and the second image;
and taking the first camera pose information and the second camera pose information as the camera pose information set.
3. The method of claim 2, wherein the determining a set of location information corresponding to the object in the target image from the set of optical flow information and the set of camera pose information comprises:
triangulating the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object;
triangulating the first pixel information, the third pixel information and the second camera pose information to determine second position information corresponding to the object;
the first location information and the second location information are taken as the location information set.
4. A method according to claim 3, wherein said determining a first set of mapping information for said object in said two adjacent frames of images from said camera pose information set and said position information set comprises:
Determining mapping pixel information of the object in the first image according to the second position information and the first camera pose information;
Determining mapped pixel information of the object in the second image according to the first position information and the second camera pose information;
And taking the mapping pixel information of the object in the first image and the mapping pixel information of the object in the second image as the first mapping information set.
5. A method according to claim 3, wherein said determining a second set of mapping information for said object in said target image from said set of location information comprises:
Determining first mapping pixel information of the object in the target image according to the first position information;
determining second mapping pixel information of the object in the target image according to the second position information;
And taking the first mapping pixel information and the second mapping pixel information as the second mapping information set.
6. The method of claim 1, wherein the determining the target optical flow information from the optical flow information set, the first mapping information set, and the second mapping information set comprises:
determining first compensation information according to the optical flow information set and the first mapping information set;
Determining second compensation information according to the optical flow information set and the second mapping information set;
The first compensation information and the second compensation information are taken as the target optical flow information.
7. A device for determining target optical flow information, comprising:
an optical flow information set and camera pose information set determining module, configured to respectively determine an optical flow information set between a target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images; the two adjacent frame images are two frame images adjacent to the target image; wherein the determining the optical flow information set between the target image and the two adjacent frames of images includes: determining a plurality of objects and position data corresponding to each object from the target image, respectively determining, from the two frames of images adjacent to the target image, a target corresponding to each object and position data corresponding to each target, and determining the optical flow information set between the target image and the adjacent two frames of images according to the average value of the position data corresponding to the objects and the position data corresponding to the targets;
The position information set determining module is used for determining a position information set corresponding to an object in the target image according to the optical flow information set and the camera pose information set;
the first mapping information set determining module is used for determining a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set;
A second mapping information set determining module, configured to determine a second mapping information set of the object in the target image according to the location information set;
And the target optical flow information determining module is used for determining target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
8. The apparatus of claim 7, wherein the location information set determination module comprises:
The first position information determining module is used for performing triangulation processing on the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object;
and the second position information determining module is used for performing triangulation processing on the first pixel information, the third pixel information and the second camera pose information to determine second position information corresponding to the object.
9. An electronic device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the method of determining the target optical flow information of any one of claims 1-6.
10. A computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the method of determining target optical flow information of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110146441.2A CN112967228B (en) | 2021-02-02 | 2021-02-02 | Determination method and device of target optical flow information, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112967228A CN112967228A (en) | 2021-06-15 |
CN112967228B true CN112967228B (en) | 2024-04-26 |