CN112967228A - Method and device for determining target optical flow information, electronic equipment and storage medium - Google Patents
- Publication number
- CN112967228A (application CN202110146441.2A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
Abstract
The invention relates to a method, an apparatus, an electronic device and a storage medium for determining target optical flow information. The method comprises: determining an optical flow information set between a target image and two adjacent frames of images, and a camera pose information set corresponding to the target image and the two adjacent frames of images; determining a position information set corresponding to an object in the target image according to the optical flow information set and the camera pose information set; determining a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set; determining a second mapping information set of the object in the target image according to the position information set; and determining the target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set. The method and apparatus can improve the accuracy of determining the position information of a moving object, as well as the accuracy of determining the motion state of the object.
Description
Technical Field
The present invention relates to the field of computer vision, and in particular, to a method and an apparatus for determining target optical flow information, an electronic device, and a storage medium.
Background
In the field of computer vision, the optical flow field describes the position change of each pixel between two frames of images, that is, the spatial motion information of objects in the scene. When the camera itself is moving, a moving object must be estimated from images captured by a moving camera; in other words, the scene motion described by the optical flow field couples camera motion with object motion, so the optical flow field needs to be compensated.
A current scheme for compensating the optical flow field is the rotation compensation method proposed by Bideau. In this method, the rotation parameters of the camera are assumed to be known, the optical flow induced by the camera rotation is computed from those parameters, and the compensated optical flow information is then determined from the observed optical flow information and the rotation-induced optical flow information. The compensated optical flow information determined by this method is not accurate enough, so neither the position information nor the motion state of a moving object can be accurately estimated from it.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for determining target optical flow information, an electronic device, and a storage medium, which can obtain accurate target optical flow information to improve accuracy of determining position information of a moving object and improve accuracy of determining a motion state of the object.
The embodiment of the invention provides a method for determining target optical flow information, which comprises the following steps:
respectively determining an optical flow information set between the target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images;
determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set;
determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set;
determining a second mapping information set of the object in the target image according to the position information set;
and determining the target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
Further, respectively determining an optical flow information set between the target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images, comprising:
determining a target image, a first image and a second image from a plurality of frame images; the first image and the second image are respectively adjacent to the target image;
determining first pixel information of the object from the target image, determining second pixel information of the object from the first image, and determining third pixel information of the object from the second image;
determining first optical flow information from the first pixel information and the second pixel information;
determining second optical flow information from the first pixel information and the third pixel information;
taking the first optical flow information and the second optical flow information as the optical flow information set;
determining first camera pose information corresponding to the target image and the first image;
determining second camera pose information corresponding to the target image and the second image;
and taking the first camera pose information and the second camera pose information as a camera pose information set.
Further, determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set, including:
performing triangulation on the first pixel information, the second pixel information and the first camera pose information to determine first position information corresponding to the object;
performing triangulation on the first pixel information, the third pixel information and the second camera pose information to determine second position information corresponding to the object;
the first location information and the second location information are taken as a location information set.
Further, determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set, including:
determining mapping pixel information of the object in the first image according to the second position information and the first camera pose information;
determining mapping pixel information of the object in the second image according to the first position information and the second camera pose information;
mapping pixel information of the object in the first image and mapping pixel information of the object in the second image are taken as a first mapping information set.
Further, determining a second set of mapping information of the object in the target image based on the set of location information, comprising:
determining first mapping pixel information of the object in the target image according to the first position information;
determining second mapping pixel information of the object in the target image according to the second position information;
the first mapped pixel information and the second mapped pixel information are taken as a second set of mapping information.
Further, determining the target optical flow information from the set of optical flow information, the first set of mapping information, and the second set of mapping information, comprising:
determining first compensation information according to the optical flow information set and the first mapping information set;
determining second compensation information according to the optical flow information set and the second mapping information set;
the first compensation information and the second compensation information are set as target optical flow information.
Accordingly, an embodiment of the present invention further provides an apparatus for determining target optical flow information, where the apparatus includes:
the optical flow information set and camera pose information set determining module is used for respectively determining an optical flow information set between the target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images;
the position information set determining module is used for determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set;
the first mapping information set determining module is used for determining a first mapping information set of the object in two adjacent frames of images according to the camera pose information set and the position information set;
the second mapping information set determining module is used for determining a second mapping information set of the object in the target image according to the position information set;
and the target optical flow information determining module is used for determining the target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
Further, the location information set determination module includes:
the first position information determining module is used for triangulating the first pixel information, the second pixel information and the first camera pose information and determining first position information corresponding to the object;
and the second position information determining module is used for triangulating the first pixel information, the third pixel information and the second camera pose information and determining second position information corresponding to the object.
Accordingly, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the method for determining the target optical flow information.
Accordingly, an embodiment of the present invention further provides a computer-readable storage medium, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method for determining the target optical flow information.
The embodiment of the invention has the following beneficial effects:
according to the embodiment of the invention, the target optical flow information is determined according to the optical flow information and the camera pose information corresponding to the multi-frame images, so that the target optical flow information can be more accurate, the optical flow information corresponding to the multi-frame images can be compensated by utilizing the target optical flow information, the accuracy of determining the position information of the moving object can be improved, and the accuracy of determining the motion state of the object can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings used in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a method for determining target optical flow information according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for determining target optical flow information according to an embodiment of the present invention;
FIG. 3 is a flow chart for determining an optical flow information set and a camera pose information set according to an embodiment of the present invention;
FIG. 4 is a flowchart of determining a set of position information corresponding to an object in a target image according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for determining target optical flow information according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the invention, not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, shall fall within the protection scope of the present invention.
An "embodiment" as referred to herein relates to a particular feature, structure, or characteristic that may be included in at least one implementation of the invention. In the description of the embodiments of the present invention, it should be understood that the terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicit indication of the number of technical features indicated. Thus, features defined as "first", "second", and "third" may explicitly or implicitly include one or more of the features. Moreover, the terms "first," "second," "third," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, apparatus, article, or device.
The present specification provides method steps as illustrated in the examples or flowcharts, but may include more or fewer steps based on routine or non-inventive labor. The order of steps recited in the embodiments is only one of many possible orders of execution and does not represent the only order of execution, and in actual execution, the steps may be performed sequentially or in parallel as in the embodiments or methods shown in the figures (e.g., in the context of parallel processors or multi-threaded processing).
A specific embodiment of a method for determining target optical flow information according to the present invention is described below, fig. 1 is a flowchart of the method for determining target optical flow information according to the embodiment of the present invention, and fig. 2 is a schematic diagram of the method for determining target optical flow information according to the embodiment of the present invention. Specifically, as shown in fig. 1 and 2, the method includes:
s101: and respectively determining an optical flow information set between the target image and the two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images.
In this specification embodiment, the two adjacent frame images may refer to two frame images adjacent to the target image.
In an alternative embodiment, a plurality of objects and the position data of each object may be determined from the target image, and the target corresponding to each object, together with its position data, may be determined from each of the two frames adjacent to the target image; the average of the differences between the target position data and the object position data may then be taken as the optical flow information between the target image and that adjacent image. For example, when the images captured by the camera contain several trees and several moving vehicles, the moving vehicles may be taken as the objects in the target image, and the same vehicles may be determined from the two adjacent frames as the targets corresponding to those objects.
For example, position data a1, a2 and a3 of a plurality of objects may be determined from the target image A, position data b1, b2 and b3 of the corresponding targets may be determined from an adjacent image B, and position data c1, c2 and c3 may be determined from an adjacent image C. The optical flow information f1 and f2 between the target image and the two adjacent frames of images can then be determined by the following formulas:
f1 = [(b1 - a1) + (b2 - a2) + (b3 - a3)] / 3
f2 = [(c1 - a1) + (c2 - a2) + (c3 - a3)] / 3
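The averaging step above can be sketched as follows (a minimal illustration; the function name and all position values are hypothetical, and positions are assumed to be 2-D pixel coordinates in matching order):

```python
import numpy as np

def mean_optical_flow(object_positions, target_positions):
    """Average displacement between matched positions in two frames.

    object_positions: (N, 2) pixel coordinates of N objects in the
    target image; target_positions: coordinates of the corresponding
    targets in an adjacent frame, in the same order.
    """
    a = np.asarray(object_positions, dtype=float)
    b = np.asarray(target_positions, dtype=float)
    return (b - a).mean(axis=0)  # one 2-D flow vector for the frame pair

# Positions a1..a3 in target image A and b1..b3 in adjacent image B
a = [[10.0, 10.0], [20.0, 15.0], [30.0, 20.0]]
b = [[12.0, 11.0], [22.0, 16.0], [32.0, 21.0]]
f1 = mean_optical_flow(a, b)  # → array([2., 1.])
```

The same call with the positions from image C would yield f2.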
In another alternative embodiment, the optical flow information set between the target image and the two adjacent frames of images, and the camera pose information set corresponding to the target image and the two adjacent frames of images, may be determined separately, wherein the two adjacent frames of images refer to the two frames adjacent to the target image.
Fig. 3 is a flowchart of determining an optical flow information set and a camera pose information set according to an embodiment of the present invention, specifically, as shown in fig. 2 and 3, the method includes:
s301: determining a target image, a first image and a second image from a plurality of frame images; the first image and the second image are adjacent to the target image, respectively.
In the embodiment of the invention, three consecutive frames may be determined from an image sequence containing a plurality of frames and taken, in order, as the first image, the target image and the second image; alternatively, one frame may be chosen from the plurality of frames as the target image, its previous frame taken as the first image, and its next frame taken as the second image. The first image and the second image are the two adjacent frames of images.
In the embodiment of the invention, the first image may be the (t-1)-th frame image, the target image the t-th frame image, and the second image the (t+1)-th frame image, and the object may be an object point.
S303: first pixel information of the object is determined from the target image, second pixel information of the object is determined from the first image, and third pixel information of the object is determined from the second image.
In an embodiment of the present invention, first pixel information of the object may be determined from the target image, second pixel information from the first image, and third pixel information from the second image. Specifically, as shown in fig. 2, the first pixel information P_t of the object point X_t may be determined in the target image (the t-th frame image) captured at camera position C_t; the second pixel information P_{t-1} of the object point X_{t-1} may be determined in the first image (the (t-1)-th frame image) captured at camera position C_{t-1}; and the third pixel information P_{t+1} of the object point X_{t+1} may be determined in the second image (the (t+1)-th frame image) captured at camera position C_{t+1}. Note that X_{t-1}, X_t and X_{t+1} are the positions of the same object point at different times.
S305: first optical flow information is determined from the first pixel information and the second pixel information.
In the embodiment of the invention, the first optical flow information f_1 may be determined from the first pixel information P_t of the object in the target image and the second pixel information P_{t-1} of the object in the first image. That is, the first optical flow information describes the mapping from the pixel corresponding to the object point in the (t-1)-th frame image to the corresponding pixel in the t-th frame image. Specifically, the first optical flow information f_1 may be determined using the following formula:
f_1 = P_t - P_{t-1}
S307: second optical flow information is determined from the root first pixel information and the third pixel information.
In the embodiment of the invention, the second optical flow information f_2 may be determined from the first pixel information P_t of the object in the target image and the third pixel information P_{t+1} of the object in the second image. That is, the second optical flow information describes the mapping from the pixel corresponding to the object point in the t-th frame image to the corresponding pixel in the (t+1)-th frame image. Specifically, the second optical flow information f_2 may be determined using the following formula:
f_2 = P_{t+1} - P_t
S309: the first optical-flow information and the second optical-flow information are set as optical-flow information.
In the embodiment of the present invention, the first optical-flow information and the second optical-flow information described above may be set as the optical-flow information set.
S311: first camera pose information corresponding to the target image and the first image is determined.
In the embodiment of the invention, the first camera pose information T_1 may be determined from the position C_t of the camera when the target image was captured and the position C_{t-1} of the camera when the first image was captured; that is, the first camera pose information T_1 may be a vector whose start point is the camera position at time t and whose end point is the camera position at time t-1.
S313: and determining second camera position and posture information corresponding to the target image and the second image.
In an embodiment of the invention, the second camera pose information T_2 may be determined from the position C_t of the camera when the target image was captured and the position C_{t+1} of the camera when the second image was captured; that is, the second camera pose information T_2 may be a vector whose start point is the camera position at time t and whose end point is the camera position at time t+1.
S315: and taking the first camera pose information and the second camera pose information as a camera pose information set.
In the embodiment of the present invention, the first camera pose information and the second camera pose information described above may be taken as a camera pose information set.
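The pose vectors T_1 and T_2 above can be sketched as simple displacement vectors between camera positions (a simplification: a full camera pose also includes rotation, which this sketch omits, and all position values are hypothetical):

```python
import numpy as np

# Hypothetical camera positions at times t-1, t and t+1
C_prev = np.array([0.0, 0.0, 0.0])   # C_{t-1}
C_curr = np.array([0.5, 0.0, 0.1])   # C_t
C_next = np.array([1.0, 0.0, 0.2])   # C_{t+1}

# T1: start point at the camera position at time t, end point at time t-1
T1 = C_prev - C_curr
# T2: start point at the camera position at time t, end point at time t+1
T2 = C_next - C_curr
```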
S103: and determining a position information set corresponding to the object in the target image according to the optical flow information set and the camera pose information set.
In the embodiment of the invention, the position information set corresponding to the object in the target image can be determined according to the first optical flow information, the second optical flow information, the first camera pose information and the second camera pose information. Fig. 4 is a flowchart of determining a position information set corresponding to an object in a target image according to an embodiment of the present invention, which is specifically shown in fig. 2 and fig. 4.
S401: triangularization processing is carried out on the first pixel information, the second pixel information and the first camera pose information, and first position information corresponding to the object is determined.
In the embodiment of the invention, the first position information P'_{t+1} corresponding to the object may be determined from the first pixel information P_t of the object in the target image, the second pixel information P_{t-1} of the object in the first image, and the first camera pose information T_1 corresponding to the target image and the first image; that is, triangulation is performed on the first pixel information P_t, the second pixel information P_{t-1} and the first camera pose information T_1 to obtain the first position information P'_{t+1} corresponding to the object point.
S403: and triangularizing the first pixel information, the third pixel information and the second camera position and posture information, and determining second position information corresponding to the object.
In the embodiment of the invention, the second position information P'_{t-1} corresponding to the object may be determined from the first pixel information P_t of the object in the target image, the third pixel information P_{t+1} of the object in the second image, and the second camera pose information T_2 corresponding to the target image and the second image; that is, triangulation is performed on the first pixel information P_t, the third pixel information P_{t+1} and the second camera pose information T_2 to obtain the second position information P'_{t-1} corresponding to the object point.
S405: the first location information and the second location information are taken as a location information set.
In the embodiment of the present invention, the first location information and the second location information described above may be used as the location information set.
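The triangulation in S401 and S403 can be sketched with a standard linear (DLT) method; the patent does not name a specific algorithm, so the approach and the projection-matrix inputs below are assumptions:

```python
import numpy as np

def triangulate(P_a, P_b, x_a, x_b):
    """Linear (DLT) triangulation of one object point from two views.

    P_a, P_b: 3x4 projection matrices of the two cameras.
    x_a, x_b: matched pixel coordinates (u, v) of the object point.
    Returns the 3-D point in the common world frame.
    """
    A = np.vstack([
        x_a[0] * P_a[2] - P_a[0],
        x_a[1] * P_a[2] - P_a[1],
        x_b[0] * P_b[2] - P_b[0],
        x_b[1] * P_b[2] - P_b[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two cameras: identity intrinsics, second camera shifted one unit along x
P_a = np.hstack([np.eye(3), np.zeros((3, 1))])
P_b = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P_a, P_b, (0.0, 0.0), (-0.5, 0.0))  # → approx. [0., 0., 2.]
```

In practice libraries such as OpenCV provide an equivalent routine (`cv2.triangulatePoints`).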
S105: and determining a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set.
In the embodiment of the invention, the mapped pixel information P_1 of the object in the first image may be determined from the second position information P'_{t-1} and the first camera pose information T_1, and the mapped pixel information P_2 of the object in the second image may be determined from the first position information P'_{t+1} and the second camera pose information T_2; the mapped pixel information P_1 of the object in the first image and the mapped pixel information P_2 of the object in the second image are then taken as the first mapping information set.
Specifically, using the position change data of the camera from time t to time t-1, triangulation may be performed on the first pixel information P_t, the third pixel information P_{t+1} and the second camera pose information T_2; the resulting spatial position data of the object point is transformed into the camera coordinate system corresponding to the (t-1)-th frame image and projected, which determines the mapped pixel information P_1 of the second position information P'_{t-1} in the (t-1)-th frame image. Likewise, using the position change data of the camera from time t to time t+1, triangulation may be performed on the first pixel information P_t, the second pixel information P_{t-1} and the first camera pose information T_1; the resulting spatial position data is transformed into the camera coordinate system corresponding to the (t+1)-th frame image and projected, which determines the mapped pixel information P_2 of the first position information P'_{t+1} in the (t+1)-th frame image.
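Transforming a triangulated point into an adjacent frame's camera coordinate system and projecting it, as described above, can be sketched as follows (the rotation R, translation t and intrinsic matrix K are assumed inputs, not quantities given in the patent):

```python
import numpy as np

def reproject(X_world, R, t, K):
    """Map a 3-D point into another frame's camera and project to pixels.

    R (3x3) and t (3,) take world coordinates into that camera's
    coordinate system; K is the 3x3 intrinsic matrix.
    """
    X_cam = R @ X_world + t   # change to the adjacent frame's camera frame
    x = K @ X_cam             # perspective projection
    return x[:2] / x[2]       # mapped pixel coordinates (u, v)

# A point two units in front of the camera, seen from a camera shifted in x
uv = reproject(np.array([0.0, 0.0, 2.0]),
               np.eye(3), np.array([1.0, 0.0, 0.0]), np.eye(3))
# → array([0.5, 0. ])
```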
S107: from the set of position information, a second set of mapping information of the object in the target image is determined.
In the embodiment of the invention, the mapped pixel information P_3 of the object in the target image may be determined from the first position information P'_{t+1}, and the mapped pixel information P_4 of the object in the target image may be determined from the second position information P'_{t-1}; the mapped pixel information P_3 and the mapped pixel information P_4 of the object in the target image are then taken as the second mapping information set.
Specifically, triangulation may be performed on the first pixel information P_t, the third pixel information P_{t+1} and the second camera pose information T_2, and the resulting spatial position data of the object point projected into the camera coordinate system corresponding to the t-th frame image, which determines the mapped pixel information P_3 of the second position information P'_{t-1} in the t-th frame image. Likewise, triangulation may be performed on the first pixel information P_t, the second pixel information P_{t-1} and the first camera pose information T_1, and the resulting spatial position data projected into the camera coordinate system corresponding to the t-th frame image, which determines the mapped pixel information P_4 of the first position information P'_{t+1} in the t-th frame image.
S109: and determining the target optical flow information according to the optical flow information set, the first mapping information set and the second mapping information set.
In the embodiment of the present invention, the first compensation information may be determined from the optical flow information set and the first mapping information set, the second compensation information may be determined from the optical flow information set and the second mapping information set, and the first compensation information and the second compensation information may be taken as the target optical flow information.
In a specific embodiment, the backward compensation optical flow field r_1 may be determined from the mapped pixel information P_1 of the object in the first image and the second pixel information P_{t-1} of the object in the first image, and the forward compensation optical flow field r_2 may be determined from the mapped pixel information P_2 of the object in the second image and the third pixel information P_{t+1} of the object in the second image. The backward compensation optical flow field r_1 and the forward compensation optical flow field r_2 are then used as the first compensation information. The following formulas may be used to determine r_1 and r_2:
r_1 = P_1 − P_{t-1}
r_2 = P_2 − P_{t+1}
In a specific embodiment, the first projection residual r_3 may be determined from the first pixel information P_t of the object in the target image and the mapped pixel information P_3 of the object in the target image, and the second projection residual r_4 may be determined from the first pixel information P_t and the mapped pixel information P_4 of the object in the target image. The first projection residual r_3 and the second projection residual r_4 are then used as the second compensation information. The following formulas may be used to determine r_3 and r_4:
r_3 = P_3 − P_t
r_4 = P_4 − P_t
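As a minimal numeric illustration of the four formulas above (all pixel values below are invented for this sketch, not taken from the patent):

```python
import numpy as np

# Observed pixel information of one object point (hypothetical values):
Pt_minus = np.array([118.0, 203.0])  # P_{t-1}: second pixel information (first image)
Pt       = np.array([120.0, 200.0])  # P_t:     first pixel information (target image)
Pt_plus  = np.array([122.5, 197.0])  # P_{t+1}: third pixel information (second image)

# Mapped pixel information obtained by triangulation and reprojection
# (hypothetical values):
P1 = np.array([117.6, 203.2])  # object mapped into the first image
P2 = np.array([122.9, 196.7])  # object mapped into the second image
P3 = np.array([120.3, 199.8])  # object mapped back into the target image
P4 = np.array([119.8, 200.1])  # object mapped back into the target image

r1 = P1 - Pt_minus  # backward compensation optical flow field
r2 = P2 - Pt_plus   # forward compensation optical flow field
r3 = P3 - Pt        # first projection residual
r4 = P4 - Pt        # second projection residual
```

The pair (r_1, r_2) forms the first compensation information and (r_3, r_4) the second; together they constitute the target optical flow information.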
In the embodiment of the present invention, the method for determining target optical flow information described above may be implemented with a target optical flow information determination model. Specifically, the optical flow information set between the target image and the two adjacent frames of images, together with the camera pose information set corresponding to these images, may be used as inputs of the model, and the model may output the compensation optical flow information, that is, the backward compensation optical flow field r_1, the forward compensation optical flow field r_2, the first projection residual r_3, and the second projection residual r_4. Inside the model, the method steps described above are carried out and are not repeated here.
With the method for determining target optical flow information provided by the embodiment of the invention, the target optical flow information is determined from the optical flow information and camera pose information corresponding to multiple frames of images. The target optical flow information can then be used to compensate the optical flow information of those frames, improving the accuracy of determining both the position information and the motion state of a moving object.
An embodiment of the present invention further provides a device for determining target optical flow information, and fig. 5 is a schematic structural diagram of the device for determining target optical flow information, as shown in fig. 5, the device includes:
an optical flow information set and camera pose information set determining module 501, configured to determine an optical flow information set between the target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images respectively;
a position information set determining module 503, configured to determine, according to the optical flow information set and the camera pose information set, a position information set corresponding to the object in the target image;
a first mapping information set determining module 505, configured to determine, according to the camera pose information set and the position information set, a first mapping information set of the object in two adjacent frames of images;
a second mapping information set determining module 507, configured to determine a second mapping information set of the object in the target image according to the position information set;
a target optical flow information determining module 509, configured to determine the target optical flow information according to the optical flow information set, the first mapping information set, and the second mapping information set.
With the device for determining target optical flow information provided by the embodiment of the invention, the target optical flow information can be determined from the optical flow information and camera pose information corresponding to multiple frames of images, making the target optical flow information more accurate. The optical flow information of those frames can then be compensated with the target optical flow information, improving the accuracy of determining both the position information and the motion state of a moving object.
In this embodiment of the present invention, the position information set determining module 503 includes:
the first position information determining module is used for triangulating the first pixel information, the second pixel information and the first camera pose information and determining first position information corresponding to the object;
and the second position information determining module is used for triangulating the first pixel information, the third pixel information, and the second camera pose information, and determining second position information corresponding to the object.
The device embodiments and method embodiments of the invention are based on the same inventive concept.
An embodiment of the present invention further provides an electronic device comprising a processor and a memory. The memory stores at least one instruction, at least one program, a code set, or an instruction set related to the method for determining target optical flow information in the method embodiments, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded from the memory and executed by the processor to implement the method for determining target optical flow information.
An embodiment of the present invention further provides a storage medium, which may be disposed in a server to store at least one instruction, at least one program, a code set, or an instruction set related to the method for determining target optical flow information in the method embodiments, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the method for determining target optical flow information.
Optionally, in this embodiment, the storage medium may be located in at least one of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a removable hard disk, a magnetic disk, or an optical disc.
As can be seen from the embodiments of the method, device, electronic equipment, and storage medium for determining target optical flow information provided above, the target optical flow information may be determined from the optical flow information and camera pose information corresponding to multiple frames of images, making the target optical flow information more accurate. The optical flow information of those frames may then be compensated with the target optical flow information, improving the accuracy of determining both the position information and the motion state of a moving object.
It should be noted that the foregoing descriptions of the embodiments of the present invention are provided for illustration only and are not intended to limit the invention as defined by the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that of the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve the desired results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiment is substantially similar to the method embodiment, its description is relatively brief, and the relevant points can be found in the corresponding parts of the method embodiment.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
Claims (10)
1. A method for determining target optical flow information, comprising:
respectively determining an optical flow information set between a target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images;
determining a position information set corresponding to an object in the target image according to the optical flow information set and the camera pose information set;
determining a first mapping information set of the object in the two adjacent frames of images according to the camera pose information set and the position information set;
determining a second mapping information set of the object in the target image according to the position information set;
determining target optical flow information from the set of optical flow information, the first set of mapping information, and the second set of mapping information.
2. The method of claim 1, wherein the determining an optical flow information set between the target image and the two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images respectively comprises:
determining a target image, a first image, and a second image from a plurality of frames of images, wherein the first image and the second image are each adjacent to the target image;
determining first pixel information of the object from the target image, second pixel information of the object from the first image, and third pixel information of the object from the second image;
determining first optical flow information from the first pixel information and the second pixel information;
determining second optical flow information from the first pixel information and the third pixel information;
using the first optical flow information and the second optical flow information as the optical flow information set;
determining first camera pose information corresponding to the target image and the first image;
determining second camera pose information corresponding to the target image and the second image;
using the first camera pose information and the second camera pose information as the set of camera pose information.
3. The method of claim 2, wherein determining the set of position information corresponding to the object in the target image from the set of optical flow information and the set of camera pose information comprises:
triangulating the first pixel information, the second pixel information, and the first camera pose information to determine first position information corresponding to the object;
triangulating the first pixel information, the third pixel information, and the second camera pose information to determine second position information corresponding to the object;
and using the first position information and the second position information as the position information set.
4. The method of claim 3, wherein determining a first set of mapping information of the object in the two adjacent frames of images from the set of camera pose information and the set of position information comprises:
determining mapping pixel information of the object in the first image according to the second position information and the first camera pose information;
determining mapping pixel information of the object in the second image according to the first position information and the second camera pose information;
and taking the mapping pixel information of the object in the first image and the mapping pixel information of the object in the second image as the first mapping information set.
5. The method of claim 3, wherein determining a second set of mapping information for the object in the target image based on the set of location information comprises:
determining first mapping pixel information of the object in the target image according to the first position information;
determining second mapping pixel information of the object in the target image according to the second position information;
the first mapped pixel information and the second mapped pixel information are taken as the second set of mapping information.
6. The method of claim 1, wherein determining target optical flow information from the set of optical flow information, the first set of mapping information, and the second set of mapping information comprises:
determining first compensation information according to the optical flow information set and the first mapping information set;
determining second compensation information according to the optical flow information set and the second mapping information set;
and using the first compensation information and the second compensation information as the target optical flow information.
7. An apparatus for determining target optical flow information, comprising:
the optical flow information set and camera pose information set determining module is used for respectively determining an optical flow information set between a target image and two adjacent frames of images and a camera pose information set corresponding to the target image and the two adjacent frames of images;
the position information set determining module is used for determining a position information set corresponding to an object in the target image according to the optical flow information set and the camera pose information set;
a first mapping information set determining module, configured to determine, according to the camera pose information set and the position information set, a first mapping information set of the object in the two adjacent frames of images;
a second mapping information set determining module, configured to determine a second mapping information set of the object in the target image according to the position information set;
a target optical flow information determination module for determining target optical flow information according to the optical flow information set, the first mapping information set, and the second mapping information set.
8. The apparatus of claim 7, wherein the location information set determining module comprises:
the first position information determining module is used for triangulating the first pixel information, the second pixel information and the first camera pose information and determining first position information corresponding to the object;
and the second position information determining module is used for triangulating the first pixel information, the third pixel information, and the second camera pose information, and determining second position information corresponding to the object.
9. An electronic device comprising a processor and a memory, wherein the memory has stored therein at least one instruction, at least one program, set of codes, or set of instructions that are loaded and executed by the processor to implement the method of determining target optical flow information of any of claims 1-6.
10. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method of determining target optical flow information according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110146441.2A CN112967228B (en) | 2021-02-02 | Determination method and device of target optical flow information, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112967228A true CN112967228A (en) | 2021-06-15 |
CN112967228B CN112967228B (en) | 2024-04-26 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120072035A1 (en) * | 2010-09-17 | 2012-03-22 | Steven Nielsen | Methods and apparatus for dispensing material and electronically tracking same |
AU2012250766A1 (en) * | 2011-05-02 | 2013-12-19 | Certusview Technologies, Llc | Marking methods, apparatus and systems including optical flow-based dead reckoning features |
CN108230437A (en) * | 2017-12-15 | 2018-06-29 | 深圳市商汤科技有限公司 | Scene reconstruction method and device, electronic equipment, program and medium |
US20180218618A1 (en) * | 2016-10-11 | 2018-08-02 | Insitu, Inc. | Method and apparatus for target relative guidance |
CN111127522A (en) * | 2019-12-30 | 2020-05-08 | 亮风台(上海)信息科技有限公司 | Monocular camera-based depth optical flow prediction method, device, equipment and medium |
CN111178277A (en) * | 2019-12-31 | 2020-05-19 | 支付宝实验室(新加坡)有限公司 | Video stream identification method and device |
CN111583716A (en) * | 2020-04-29 | 2020-08-25 | 浙江吉利汽车研究院有限公司 | Vehicle obstacle avoidance method and device, electronic equipment and storage medium |
CN111598927A (en) * | 2020-05-18 | 2020-08-28 | 京东方科技集团股份有限公司 | Positioning reconstruction method and device |
CN112116655A (en) * | 2019-06-20 | 2020-12-22 | 北京地平线机器人技术研发有限公司 | Method and device for determining position information of image of target object |
Non-Patent Citations (2)
Title |
---|
JUNQING MIAO: "Exploiting Pose Mask Features for Video Action Recognition", 2019 IEEE 5th International Conference on Computer and Communications (ICCC) *
MA Jing: "Research and Implementation of Action Recognition Methods Based on Pose and Skeleton Information", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116228834A (en) * | 2022-12-20 | 2023-06-06 | 阿波罗智联(北京)科技有限公司 | Image depth acquisition method and device, electronic equipment and storage medium |
CN116228834B (en) * | 2022-12-20 | 2023-11-03 | 阿波罗智联(北京)科技有限公司 | Image depth acquisition method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4982410B2 (en) | Space movement amount calculation apparatus and method | |
US20140253679A1 (en) | Depth measurement quality enhancement | |
KR20180105876A (en) | Method for tracking image in real time considering both color and shape at the same time and apparatus therefor | |
JP6349418B2 (en) | Object positioning by high-precision monocular movement | |
CN107030699A (en) | Position and attitude error modification method and device, robot and storage medium | |
CN111445526A (en) | Estimation method and estimation device for pose between image frames and storage medium | |
CN107862733B (en) | Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm | |
CN111899282A (en) | Pedestrian trajectory tracking method and device based on binocular camera calibration | |
CN112991401B (en) | Vehicle running track tracking method and device, electronic equipment and storage medium | |
CN110599586A (en) | Semi-dense scene reconstruction method and device, electronic equipment and storage medium | |
CN110827321A (en) | Multi-camera cooperative active target tracking method based on three-dimensional information | |
CN111507132A (en) | Positioning method, device and equipment | |
CN108340371B (en) | Target following point positioning method and system | |
JP6431404B2 (en) | Attitude estimation model generation apparatus and attitude estimation apparatus | |
CN104471436B (en) | The method and apparatus of the variation of imaging scale for computing object | |
CN110866873A (en) | Highlight elimination method and device for endoscope image | |
CN107945166B (en) | Binocular vision-based method for measuring three-dimensional vibration track of object to be measured | |
CN112967228A (en) | Method and device for determining target optical flow information, electronic equipment and storage medium | |
CN112967228B (en) | Determination method and device of target optical flow information, electronic equipment and storage medium | |
CN110111341B (en) | Image foreground obtaining method, device and equipment | |
CN109859313B (en) | 3D point cloud data acquisition method and device, and 3D data generation method and system | |
CN111179408B (en) | Three-dimensional modeling method and equipment | |
WO2019058487A1 (en) | Three-dimensional reconstructed image processing device, three-dimensional reconstructed image processing method, and computer-readable storage medium having three-dimensional reconstructed image processing program stored thereon | |
CN109766012B (en) | Sight line calculation method and device | |
CN112991445A (en) | Model training method, attitude prediction method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |