CN107749069A - Image processing method, electronic equipment and image processing system - Google Patents


Info

Publication number
CN107749069A
CN107749069A (application CN201710903510.3A)
Authority
CN
China
Prior art keywords
image
frame images
feature point
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710903510.3A
Other languages
Chinese (zh)
Other versions
CN107749069B (en)
Inventor
王东 (Wang Dong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201710903510.3A
Publication of CN107749069A
Application granted
Publication of CN107749069B
Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides an image processing method that includes acquiring, with a single camera, at least two frames of images at different moments, where at least part of the content of the two frames corresponds; and determining, based on the capture parameters used when the two frames were acquired, the depth of the feature points of the corresponding content in the two frames, where the capture parameters include a position-change parameter used to determine the change in the camera's position between the two image acquisitions. The present disclosure also provides an electronic device and an image processing system.

Description

Image processing method, electronic equipment and image processing system
Technical field
The present disclosure relates to an image processing method, an electronic device, and an image processing system.
Background
Applications based on three-dimensional space, in the field of artificial intelligence and on mobile phones and computers, are becoming increasingly common, and schemes that use image depth data to assist image or video processing are multiplying as well. However, devices that can obtain depth data in real time remain scarce, which greatly hinders the adoption of these applications and schemes. Obtaining image depth in real time with a TOF sensor or with a dual camera requires upgrading the hardware of the capture device, which increases cost and cannot be done on much of the existing equipment.
Summary of the invention
One aspect of the present disclosure provides an image processing method that includes acquiring, with a single camera, at least two frames of images at different moments, where at least part of the content of the two frames corresponds, and determining, based on the capture parameters used when the two frames were acquired, the depth of the feature points of the corresponding content in the two frames, where the capture parameters include a position-change parameter used to determine the change in the camera's position between the two image acquisitions.
Optionally, determining, based on the capture parameters used when the two frames were acquired, the depth of the feature points of the corresponding content in the two frames comprises: determining, from those parameters, the rotation vector between the capture direction of the first image and the capture direction of the second image in the two frames, where the capture parameters also include the camera's focal length; processing, based on the rotation vector, the feature points of the corresponding content in the first image; and determining the depth of the feature points of the corresponding content in the two frames from the processed feature points, the corresponding feature points in the second image, and the capture parameters.
Optionally, determining, based on the capture parameters used when the two frames were acquired, the rotation vector between the capture direction of the first image and the capture direction of the second image comprises: choosing several groups of corresponding feature points from the two frames, and determining the optimal rotation vector as the one for which the lines connecting the feature points of the first image, after processing with the rotation vector, to their corresponding feature points in the second image share a common intersection point.
Optionally, determining the depth of the feature points of the corresponding content in the two frames from the processed feature points, the corresponding feature points in the second image, and the capture parameters comprises: determining first feature points, whose position in the first image after processing with the rotation vector is identical to their position in the second image; determining second feature points, whose position in the first image after processing with the rotation vector differs from their position in the second image; determining, from the capture parameters, the change in the camera's position between the two image acquisitions; and determining the depth of the second feature points in the two frames from the positions of the first and second feature points in the two frames and the distance and direction of the camera's movement.
Optionally, the method further comprises clustering the feature points by their depth in the image, thereby identifying objects at different depths in the image.
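The optional clustering step can be illustrated with a minimal one-dimensional sketch that groups feature-point indices by gaps in their sorted depths. The patent does not name a clustering algorithm, so this gap heuristic, its `gap` threshold and the function name are all assumptions of ours:

```python
def cluster_by_depth(depths, gap=1.0):
    """Group feature-point indices into clusters of similar depth.

    A new cluster starts whenever the gap between consecutive sorted
    depths exceeds `gap`. Assumes a non-empty list of depths.
    """
    order = sorted(range(len(depths)), key=lambda i: depths[i])
    clusters, current = [], [order[0]]
    for prev, idx in zip(order, order[1:]):
        if depths[idx] - depths[prev] > gap:
            clusters.append(current)
            current = []
        current.append(idx)
    clusters.append(current)
    return clusters
```

Points landing in the same cluster would then be treated as one object; a density-based method on (x, y, depth), such as DBSCAN, would be a natural refinement.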
Another aspect of the present disclosure provides an electronic device comprising a processor and a memory storing computer-readable instructions that, when executed by the processor, cause the processor to acquire, with a single camera, at least two frames of images at different moments, where at least part of the content of the two frames corresponds, and to determine, based on the capture parameters used when the two frames were acquired, the depth of the feature points of the corresponding content in the two frames, where the capture parameters include a position-change parameter used to determine the change in the camera's position between the two image acquisitions.
Optionally, the processor determining, based on the capture parameters used when the two frames were acquired, the depth of the feature points of the corresponding content in the two frames comprises: determining, from those parameters, the rotation vector between the capture direction of the first image and the capture direction of the second image in the two frames, where the capture parameters also include the camera's focal length; processing, based on the rotation vector, the feature points of the corresponding content in the first image; and determining the depth of the feature points of the corresponding content in the two frames from the processed feature points, the corresponding feature points in the second image, and the capture parameters.
Optionally, the processor determining, based on the capture parameters used when the two frames were acquired, the rotation vector between the capture direction of the first image and the capture direction of the second image comprises: choosing several groups of corresponding feature points from the two frames, and determining the optimal rotation vector as the one for which the lines connecting the feature points of the first image, after processing with the rotation vector, to their corresponding feature points in the second image share a common intersection point.
Optionally, the processor determining the depth of the feature points of the corresponding content in the two frames from the processed feature points, the corresponding feature points in the second image, and the capture parameters comprises: determining first feature points, whose position in the first image after processing with the rotation vector is identical to their position in the second image; determining second feature points, whose position in the first image after processing with the rotation vector differs from their position in the second image; determining, from the capture parameters, the change in the camera's position between the two image acquisitions; and determining the depth of the second feature points in the two frames from the positions of the first and second feature points in the two frames and the distance and direction of the camera's movement.
Another aspect of the present disclosure provides an image processing system comprising an acquisition module for acquiring, with a single camera, at least two frames of images at different moments, where at least part of the content of the two frames corresponds, and a processing module for determining, based on the capture parameters used when the two frames were acquired, the depth of the feature points of the corresponding content in the two frames, where the capture parameters include a position-change parameter used to determine the change in the camera's position between the two image acquisitions.
Another aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions that, when executed, implement the method described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions that, when executed, implement the method described above.
Brief description of the drawings
In order that the present disclosure and its advantages may be more fully understood, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically illustrates an application scenario of the image processing method, electronic device and image processing system according to an embodiment of the present disclosure;
Fig. 2 schematically illustrates a flowchart of the image processing method according to an embodiment of the present disclosure;
Fig. 3 schematically illustrates a flowchart of determining, based on the capture parameters used when the two frames of images were acquired, the depth of the feature points of the corresponding content in the two frames, according to an embodiment of the present disclosure;
Fig. 4A schematically illustrates the selection of several groups of corresponding feature points according to an embodiment of the present disclosure;
Fig. 4B schematically illustrates the camera acquiring an image according to an embodiment of the present disclosure;
Fig. 4C schematically illustrates that, when the camera only translates between the two moments without rotating, the extended lines through each spatial point's images at the two consecutive moments meet in a single point, according to an embodiment of the present disclosure;
Fig. 5A schematically illustrates a flowchart of determining the depth of the feature points of the corresponding content in the two frames from the processed feature points, the corresponding feature points in the second image, and the capture parameters, according to an embodiment of the present disclosure;
Fig. 5B schematically illustrates the method of determining the depth of a second feature point in the two frames from the positions of the first and second feature points in the two frames and the distance and direction of the camera's movement, according to an embodiment of the present disclosure;
Fig. 6 schematically illustrates a flowchart of the image processing method according to another embodiment of the present disclosure;
Fig. 7 schematically illustrates a block diagram of the image processing system according to an embodiment of the present disclosure;
Fig. 8 schematically illustrates a block diagram of the processing module according to an embodiment of the present disclosure;
Fig. 9 schematically illustrates a block diagram of the second determination sub-module according to an embodiment of the present disclosure;
Fig. 10 schematically illustrates a block diagram of the image processing system according to another embodiment of the present disclosure; and
Fig. 11 schematically illustrates a block diagram of the electronic device according to an embodiment of the present disclosure.
Detailed description of the embodiments
Embodiments of the present disclosure will be described below with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. Moreover, descriptions of well-known structures and techniques are omitted from the following so as not to obscure the concepts of the present disclosure unnecessarily.
The terms used herein are for describing particular embodiments only and are not intended to limit the present disclosure. Terms such as "comprising" and "including" indicate the presence of the stated features, steps, operations and/or components, but do not exclude the presence or addition of one or more other features, steps, operations or components.
Unless otherwise defined, all terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art. The terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression like "at least one of A, B and C" is used, it should in general be interpreted according to the meaning those skilled in the art commonly give it (for example, "a system having at least one of A, B and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C). Where an expression like "at least one of A, B or C" is used, it should likewise be interpreted according to its commonly understood meaning (for example, "a system having at least one of A, B or C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C). Those skilled in the art should also understand that virtually any disjunctive word and/or phrase presenting two or more alternatives, whether in the specification, the claims or the drawings, should be understood to contemplate the possibilities of including one of the items, either of the items, or both. For example, the phrase "A or B" should be understood to include the possibilities "A", "B", and "A and B".
Some block diagrams and/or flowcharts are shown in the drawings. It should be understood that some blocks of the block diagrams and/or flowcharts, or combinations of them, can be implemented by computer program instructions. These instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data-processing apparatus, so that, when executed by the processor, they create means for implementing the functions and operations illustrated in the block diagrams and/or flowcharts.
The techniques of the present disclosure can therefore be implemented in hardware and/or software (including firmware, microcode and so on). The techniques of the present disclosure can also take the form of a computer program product on a computer-readable medium storing instructions, for use by or in conjunction with an instruction-execution system. In the context of the present disclosure, a computer-readable medium can be any medium that can contain, store, communicate, propagate or transport instructions. For example, it can include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of the computer-readable medium include: magnetic storage devices such as magnetic tape or hard disks (HDD); optical storage devices such as optical discs (CD-ROM); memories such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
An embodiment of the present disclosure provides an image processing method for computing image depth, an image processing system, and an electronic device. The method includes an image acquisition process and a feature-point depth determination process. In the image acquisition process, at least two frames of images are acquired at different moments with a single camera, where at least part of the content of the two frames corresponds. In the depth determination process, the depth of the feature points of the corresponding content in the two frames is determined from the capture parameters used when the two frames were acquired. Computing image depth this way reduces the restrictions on how the camera may move, keeps the depth-solving accuracy within an acceptable range, and suits portable devices. Moreover, since the method does not require upgrading the image-capture hardware, it is not limited to particular devices, and since it does not require many consecutive frames, it reduces the amount of computation.
Fig. 1 schematically illustrates an application scenario of the image processing method, electronic device and image processing system according to an embodiment of the present disclosure.
As shown in Fig. 1, at a first moment the capture device is at position 10, and at a second moment it is at position 20; its shooting angle and shooting position differ between the two. In the example of Fig. 1, at position 10 the base of the capture device is parallel to the X direction, position 20 is at a distance s from position 10 along the X direction, and, for various reasons, the attitude of the capture device may have changed, for example by rotating through a certain angle in the direction shown.
The image the capture device obtains at position 10 is image 11, and the image obtained at position 20 is image 21; part of the content of image 11 corresponds to image 21. In the example of Fig. 1, image 11 corresponds to the pavilion in image 21.
According to an embodiment of the present disclosure, the image depth of the corresponding content is computed from image 11 and image 21, for example the image depth of the pavilion in the scene, which can be its depth in image 11 or its depth in image 21.
It is understood that the capture device can be any device capable of acquiring images, such as a mobile phone, a tablet computer, or a video camera. Positions 10 and 20 are merely illustrative: position 10 can be any position, the capture device can adopt any shooting angle, and position 20 can likewise be any position different from position 10, with the device's attitude either the same as or different from that at position 10. The at least two frames of images can be at least two images captured by the device, at least two frames taken from a video shot by the device, or at least two frames obtained in some other way. According to an embodiment of the present disclosure, it suffices that the two acquired frames contain at least some corresponding content.
According to an embodiment of the present disclosure, the image processing method, which acquires at least two frames of images at different moments, where at least part of their content corresponds, and determines the depth of the feature points of the corresponding content from the different capture parameters of the two frames, such as the position-change parameter and the rotation vector, at least partly solves the problems of prior-art depth computation: demanding hardware requirements, a heavy computational load, and numerous restrictions on how the camera may move.
Fig. 2 schematically illustrates a flowchart of the image processing method according to an embodiment of the present disclosure.
As shown in Fig. 2, the method includes operations S210 and S220.
In operation S210, at least two frames of images are acquired at different moments with a single camera, where at least part of the content of the two frames corresponds.
In operation S220, the depth of the feature points of the corresponding content in the two frames is determined based on the capture parameters used when the two frames were acquired, where the capture parameters include a position-change parameter used to determine the change in the camera's position between the two image acquisitions.
Computing image depth with this method requires neither upgrading the capture hardware nor restricting the camera's motion, reduces the amount of computation, and achieves good depth accuracy.
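As a structural illustration, operations S210 and S220 can be sketched as a tiny pipeline. Only the S210/S220 ordering comes from the method itself; the callables `capture_frame`, `capture_params`, `match_features` and `depth_of` are hypothetical placeholders of ours, since the patent names no camera API, feature matcher or depth routine.

```python
def single_camera_depth(capture_frame, capture_params, match_features, depth_of):
    """S210: acquire two frames at different moments with one camera.
    S220: derive per-feature depth from the capture parameters,
    which include the camera's position-change parameter."""
    frame_a = capture_frame()                    # first moment
    frame_b = capture_frame()                    # second, later moment
    params = capture_params()                    # includes the position change
    matches = match_features(frame_a, frame_b)   # corresponding content only
    return [depth_of(pair, params) for pair in matches]
```

A real implementation would plug in, for example, a feature-matching step such as ORB matching for `match_features`; the skeleton only fixes the order of the two operations.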
According to an embodiment of the present disclosure, "single camera" means that the image processing method needs only one camera; it does not need the dual or multiple cameras required by prior-art depth-computation methods. The single camera can be any device with an image-acquisition function, such as a mobile phone, a camera, or a video camera.
According to an embodiment of the present disclosure, the at least two frames acquired at different moments with the single camera can be images the single camera shot at different positions, images taken from videos shot at different positions, or images acquired at different positions and obtained in some other way; however they are obtained, part of their content corresponds.
According to an embodiment of the present disclosure, the camera has capture parameters when acquiring the two frames, including the camera's position-change parameter, which is used to determine the change in the camera's position between the two acquisitions. For example, between the two image acquisitions by the capture device in Fig. 1, the camera's position changed from position 10 to position 20; the position-change parameter includes the moved distance s and the moving direction, or is determined from the coordinates of positions 10 and 20.
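If the coordinates of the two capture positions are available (from the device's motion sensors, say, which is our assumption rather than something the text specifies), the moved distance s and the moving direction of the position-change parameter follow directly:

```python
import math

def position_change(pos_first, pos_second):
    """Moved distance s and unit moving direction between the two
    capture positions, given as 3-D coordinates."""
    delta = [b - a for a, b in zip(pos_first, pos_second)]
    s = math.sqrt(sum(c * c for c in delta))
    if s == 0:
        return 0.0, [0.0, 0.0, 0.0]   # camera did not move
    return s, [c / s for c in delta]
```

For the scenario of Fig. 1, where position 20 lies a distance s from position 10 along the X direction, this would yield s together with a unit vector along X.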
According to an embodiment of the present disclosure, the image processing method can determine the depth of the feature points of the corresponding content in the two frames from the two acquired frames and the capture parameters used when they were acquired.
An embodiment of doing so is described below with reference to Fig. 3. Note that Fig. 3 helps explain the image processing method of the present disclosure; the present disclosure is not limited to the embodiment Fig. 3 illustrates.
Fig. 3 schematically illustrates a flowchart of determining, based on the capture parameters used when the two frames of images were acquired, the depth of the feature points of the corresponding content in the two frames, according to an embodiment of the present disclosure.
As shown in Fig. 3, the method includes operations S310, S320 and S330.
In operation S310, the rotation vector between the capture direction of the first image and the capture direction of the second image in the two frames is determined based on the capture parameters used when the two frames were acquired, where the capture parameters also include the camera's focal length.
In operation S320, the feature points of the corresponding content in the first image are processed based on the rotation vector.
In operation S330, the depth of the feature points of the corresponding content in the two frames is determined from the processed feature points, the corresponding feature points in the second image, and the capture parameters.
Computing the depth data of an image with this method reduces both the restrictions on the camera's motion and the amount of computation, so that depth data can be obtained in real time.
According to an embodiment of the present disclosure, in operation S310 the rotation vector between the capture directions of the first and second images is determined from the capture parameters used when the two frames were acquired. Specifically, the camera's spatial coordinates are (x1, y1, z1) when acquiring the first image and (x2, y2, z2) when acquiring the second image, and the camera's rotation vector (α, β, γ) is determined on the basis of these coordinates.
According to an embodiment of the present disclosure, determining the rotation vector between the capture directions of the first and second images from the capture parameters comprises: choosing several groups of corresponding feature points from the two frames, and determining the optimal rotation vector as the one for which the lines connecting the feature points of the first image, after processing with the rotation vector, to their corresponding feature points in the second image share a common intersection point. By computing this optimal solution, an accurate rotation vector is obtained without measurement, which improves the accuracy of the depth computation. The embodiments illustrated in Figs. 4A, 4B and 4C are described below.
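The common-intersection criterion can be illustrated with a deliberately simplified sketch: the rotation vector is reduced to a single in-plane angle θ, and a brute-force grid search picks the angle at which the lines joining the rotated first-image feature points to their second-image counterparts come closest to meeting in one common point, scored by the spread of their pairwise intersections. Both simplifications, the single angle and the grid search, are ours for illustration; they are not the patent's solver.

```python
import itertools
import math

def rotate(p, theta):
    """Rotate a 2-D point about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def intersect(a1, a2, b1, b2):
    """Intersection of line a1-a2 with line b1-b2, or None if parallel."""
    d1 = (a2[0] - a1[0], a2[1] - a1[1])
    d2 = (b2[0] - b1[0], b2[1] - b1[1])
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(den) < 1e-12:
        return None
    t = ((b1[0] - a1[0]) * d2[1] - (b1[1] - a1[1]) * d2[0]) / den
    return (a1[0] + t * d1[0], a1[1] + t * d1[1])

def concurrency_score(theta, pts1, pts2):
    """Spread of the pairwise intersections of the lines joining each
    rotated first-image point to its second-image counterpart; near
    zero when all lines pass through one common point."""
    lines = [(rotate(p1, theta), p2) for p1, p2 in zip(pts1, pts2)]
    cuts = [intersect(*la, *lb)
            for la, lb in itertools.combinations(lines, 2)]
    cuts = [c for c in cuts if c is not None]
    if len(cuts) < 2:
        return float("inf")
    cx = sum(c[0] for c in cuts) / len(cuts)
    cy = sum(c[1] for c in cuts) / len(cuts)
    return sum(math.hypot(c[0] - cx, c[1] - cy) for c in cuts)

def best_rotation(pts1, pts2, steps=720):
    """Grid search for the in-plane angle minimising the spread."""
    thetas = [i * 2 * math.pi / steps for i in range(steps)]
    return min(thetas, key=lambda t: concurrency_score(t, pts1, pts2))
```

For purely translating points the common intersection plays the role of the epipole; note that θ and θ + π both make the lines concurrent, so a full solver would need additional constraints to disambiguate the two.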
According to an embodiment of the present disclosure, the two frames of images are, respectively, a first image collected by the camera in a first collection direction and a second image collected in a second collection direction, at different moments, where the first image and the second image have partially corresponding content. Multiple groups of corresponding feature points are selected from the corresponding content of the two frames of images.
Fig. 4A schematically illustrates the selection of multiple groups of corresponding feature points according to an embodiment of the present disclosure.
As shown in Fig. 4A, three groups of corresponding feature points are selected in the first image and the second image: m1 and m2, n1 and n2, and p1 and p2, where m1 corresponds to m2, n1 corresponds to n2, and p1 corresponds to p2. In practice, the number of groups of corresponding feature points can be chosen according to the situation; the present disclosure places no limit on the number of groups selected.
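For illustration only (this is not part of the patent), groups of corresponding feature points such as m1/m2, n1/n2 and p1/p2 could be selected automatically by matching feature descriptors between the two frames. The sketch below assumes descriptors (e.g. from ORB or SIFT) have already been extracted elsewhere, and uses a simple nearest-neighbour criterion; the function name `match_features` is a hypothetical choice.

```python
# Minimal nearest-neighbour matcher: a sketch of how groups of corresponding
# feature points (m1/m2, n1/n2, p1/p2 in Fig. 4A) could be selected.
# Descriptor extraction itself (e.g. ORB/SIFT) is assumed to be done elsewhere.

def match_features(desc1, desc2):
    """For each descriptor in desc1, find the index of the closest
    descriptor in desc2 (squared Euclidean distance)."""
    matches = []
    for i, d1 in enumerate(desc1):
        best_j, best_dist = -1, float("inf")
        for j, d2 in enumerate(desc2):
            dist = sum((a - b) ** 2 for a, b in zip(d1, d2))
            if dist < best_dist:
                best_j, best_dist = j, dist
        matches.append((i, best_j))
    return matches
```

In practice a ratio test or cross-check would be added to reject ambiguous matches, but the nearest-neighbour step above is the core of the selection.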
According to an embodiment of the present disclosure, when collecting the two frames of images at different moments, the camera may have rotated by a certain angle for a variety of reasons. For example, relative to the first collection direction, the second collection direction differs by the camera rotation vector R(α, β, γ).
According to an embodiment of the present disclosure, an embodiment of processing the feature points of the partially corresponding content in the first image, based on the rotation vector between the collection direction of the first image and the collection direction of the second image, is as follows.
Fig. 4B schematically illustrates the collection of images by the camera according to an embodiment of the present disclosure.
As shown in Fig. 4B, the plane of the camera is the XY plane. At the first moment, the camera collects the first image M in the first collection direction; the image point corresponding to object point A in the first image M is image point A1, whose coordinates in the coordinate system of the camera plane are (x1, y1). At the second moment, the camera has moved by a certain amount and, for some reason, has also rotated by a certain angle; the camera collects the second image N in the second collection direction. The image point corresponding to object point A in the second image N is image point A2, whose coordinates in the coordinate system of the camera plane are (x2, y2), and the rotation vector of the camera is R(α, β, γ).
According to an embodiment of the present disclosure, the feature points of the partially corresponding content in the first image (image M) are processed based on the rotation vector. The processing includes rotating each such feature point, for example image point A1, by the rotation vector R(α, β, γ), so as to compensate for the image-point displacement caused by the rotation of the camera. As a result, the displacement between A1 and A2 for the partially corresponding content is caused only by the translation of the camera, not by its rotation. After being rotated by R(α, β, γ), the coordinates of image point A1(x1, y1) become A1′(x1′, y1′); A1′ is the result of processing the feature point of the partially corresponding content in the first image based on the rotation vector between the collection direction of the first image and the collection direction of the second image.
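The rotation compensation just described can be sketched as applying the pure-rotation homography x′ ~ K·R·K⁻¹·x to each image point. The sketch below assumes normalized camera coordinates (so K = I) and a Z-Y-X Euler-angle convention for (α, β, γ); the patent names only the vector R(α, β, γ), so both assumptions, and the function names, are illustrative.

```python
import math

def rot_matrix(alpha, beta, gamma):
    """3x3 rotation R = Rz(gamma) @ Ry(beta) @ Rx(alpha). The Euler-angle
    convention is an assumption; the patent only names (alpha, beta, gamma)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    rx = [[1, 0, 0], [0, ca, -sa], [0, sa, ca]]
    ry = [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]
    rz = [[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]]
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def compensate_rotation(pt, alpha, beta, gamma):
    """Map image point A1 -> A1' by the pure-rotation homography x' ~ R x,
    working in normalized camera coordinates (focal length 1, principal
    point at the origin), so that H = K R K^-1 reduces to R."""
    r = rot_matrix(alpha, beta, gamma)
    x, y = pt
    h = [r[i][0] * x + r[i][1] * y + r[i][2] * 1.0 for i in range(3)]
    return (h[0] / h[2], h[1] / h[2])  # dehomogenize back to 2-D
```

With a zero rotation vector the point is unchanged; applying the camera's actual R removes the rotation-induced part of the A1→A2 displacement, leaving only the translation-induced part.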
When the camera only translates between the two moments, without any rotation, the extended lines through the two images of each point in space meet at a single point.
Fig. 4C schematically illustrates that, when the camera only translates between the two moments without any rotation, the lines through the two images of each point in space meet at a single point, according to an embodiment of the present disclosure.
As shown in Fig. 4C, the lines A1′A2, B1′B2 and C1′C2 intersect at point P. As described above, after the feature points of the partially corresponding content in the first image have been processed based on the rotation vector between the collection directions of the first and second images, the processed feature points have only translated, without any rotation, relative to the corresponding feature points in the second image. Therefore the lines A1′A2, B1′B2 and C1′C2 intersect at one point; let the intersection point be P(x0, y0).
According to an embodiment of the present disclosure, a certain error exists due to factors such as measurement accuracy, so the lines do not meet exactly at one point. The error can be represented, for example, by the sum of the distances from the candidate intersection point P(x0, y0) to the lines connecting the processed feature points with the corresponding feature points in the second image. Using nonlinear optimization to compute the minimum of this error yields the α, β, γ, x0 and y0 corresponding to the minimum error; the R(α, β, γ) at that minimum is the optimal solution of the rotation vector.
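The inner step of that optimization, fitting the best common intersection P(x0, y0) for one candidate rotation, can be solved in closed form by linear least squares over the point-to-line distances. The sketch below is an illustrative implementation (not the patent's); an outer nonlinear search over (α, β, γ) would then drive down the residual at P.

```python
import math

def best_intersection(lines):
    """Least-squares common intersection P = (x0, y0) of 2-D lines, each
    given as a pair of points (A1', A2). Minimizes the sum of squared
    point-to-line distances by solving the 2x2 normal equations
    (sum n n^T) P = sum n (n . A), where n is each line's unit normal."""
    sxx = sxy = syy = bx = by = 0.0
    for (ax, ay), (cx, cy) in lines:
        dx, dy = cx - ax, cy - ay
        norm = math.hypot(dx, dy)
        nx, ny = -dy / norm, dx / norm      # unit normal of the line
        sxx += nx * nx; sxy += nx * ny; syy += ny * ny
        d = nx * ax + ny * ay               # line offset n . A
        bx += nx * d; by += ny * d
    det = sxx * syy - sxy * sxy
    x0 = (syy * bx - sxy * by) / det
    y0 = (sxx * by - sxy * bx) / det
    return x0, y0
```

With exact, rotation-compensated data the residual at P is zero; with noisy data the residual is the error term the outer optimization over (α, β, γ) minimizes.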
According to an embodiment of the present disclosure, in operation S320, the rotation vector is the rotation vector determined in operation S310, for example the R(α, β, γ) determined from the optimal solution. Processing the feature points of the partially corresponding content in the first image based on the rotation vector is understood as rotating the coordinates of those feature points by the rotation vector, so as to compensate for the image-point displacement caused by the rotation of the camera; as a result, the feature points of the corresponding content in the first image and the second image have only translated, not rotated, relative to each other.
According to an embodiment of the present disclosure, in operation S330, the depth of the feature points of the partially corresponding content in the two frames of images is determined based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters.
The method of determining the depth of the feature points of the partially corresponding content in the two frames of images, based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters, is described below with reference to Fig. 5A and Fig. 5B. It should be noted that Fig. 5A and Fig. 5B merely illustrate the image processing method of the present disclosure; the disclosure is not limited to the embodiments illustrated in Fig. 5A and Fig. 5B.
Fig. 5A schematically illustrates a flowchart of determining the depth of the feature points of the partially corresponding content in the two frames of images, based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters, according to an embodiment of the present disclosure.
As shown in Fig. 5A, the method includes operations S510, S520, S530 and S540.
In operation S510, a first feature point is determined, where the position of the first feature point in the first image after rotation-vector processing is the same as the position of the first feature point in the second image.
In operation S520, a second feature point is determined, where the position of the second feature point in the first image after rotation-vector processing is different from the position of the second feature point in the second image.
In operation S530, the position change of the camera between the two image acquisitions is determined according to the acquisition parameters.
In operation S540, the depth of the second feature point in the two frames of images is determined according to the positions of the first feature point and the second feature point in the two frames of images, together with the distance and direction of the camera movement.
The method of Fig. 5A is described below with reference to the embodiment of Fig. 5B.
Fig. 5B schematically illustrates the method of determining the depth of the second feature point in the two frames of images according to the positions of the first feature point and the second feature point in the two frames of images, and the distance and direction of the camera movement, according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, in operation S510, a first feature point, denoted S0, is determined; the position of S0 in the first image, after the processing of operation S320, is the same as the position of the first feature point in the second image. As shown in Fig. 5B, the first feature point S0 in the first image corresponds to point S in the second image.
According to an embodiment of the present disclosure, the first feature point is determined in operation S510 as described above. In operation S520, a second feature point is determined. As shown in Fig. 5B, the second feature point T0 in the first image corresponds to point T in the second image.
According to an embodiment of the present disclosure, in operation S530, the position change of the camera between the two image acquisitions is determined according to the acquisition parameters; the acquisition parameters include the focal length of the camera, the movement distance s and the movement direction, or position-change parameters determined from position coordinates.
According to an embodiment of the present disclosure, in operation S540, the first feature point is point S shown in Fig. 5B and the second feature point is point T shown in Fig. 5B. C1 is the position of the camera at the first moment and C2 is its position at the second moment. Connecting C1 and C2 and extending the line, it meets the depth plane of S at point D1 and the depth plane of T at point D2. The image point of S in the first image is S1 and in the second image is S2; the image point of T in the first image is T1 and in the second image is T2. It should be noted that Fig. 5B is only a schematic illustration; the displacement C1C2 and its direction can take arbitrary values. In the situation illustrated in Fig. 5B, the depth of the second feature point in the two frames of images is computed from the positions of the first and second feature points in the two frames of images, together with the distance and direction of the camera movement, as described below.
As shown in Fig. 5B:

since △S2DC2 ∽ △SD1C2,

△S1DC2 ∽ △SD1C1, and

S1D = S1S2 + S2D,

the image depth of the first feature point S in the second image can be obtained:

C2D1 = C1C2 · S1D / S1S2.
Similarly, the image depth of the second feature point T in the second image, the image depth of T in the first image, and the image depth of the first feature point S in the first image can be obtained.
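As a sketch, the similar-triangle relation C2D1 = C1C2 · S1D / S1S2 reduces to one line of arithmetic once the image coordinates of S1, S2 and D are known. The 1-D coordinate convention (positions measured along one image axis) and the function name below are illustrative assumptions, not the patent's notation.

```python
def depth_from_disparity(baseline, s1, s2, d):
    """Depth of feature S seen from the second camera position, via the
    similar-triangle relation C2D1 = C1C2 * S1D / S1S2.
    baseline: camera displacement C1C2;
    s1, s2:   image coordinates of S in the first and second frames;
    d:        image coordinate of point D (so S1D = d - s1, S1S2 = s2 - s1).
    All coordinates are along one image axis (a 1-D sketch)."""
    s1s2 = s2 - s1          # disparity of S between the two frames
    s1d = d - s1            # S1D = S1S2 + S2D
    return baseline * s1d / s1s2
```

As the disparity S1S2 shrinks (a distant point), the computed depth grows, matching the usual inverse relation between disparity and depth.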
The process by which this method computes image depth is relatively simple; while computational accuracy is guaranteed, the amount of calculation can be significantly reduced.
Fig. 6 diagrammatically illustrates the flow chart of the image processing method according to another embodiment of the disclosure.
As shown in Fig. 6, the method includes operations S210, S220 and S610.
In operation S610, based on the depths of the feature points in the image, the feature points are clustered to identify objects at different depths in the image. According to an embodiment of the present disclosure, an existing clustering method such as K-means can be used to cluster the depths of the pixels in the image; pixels with similar depths are considered likely to represent one object, so the objects in the image can be identified.
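As an illustrative sketch of this step (K-means is only one possible clustering method, and the toy 1-D implementation below is not the patent's), pixels whose depth values fall in the same cluster are treated as belonging to one object:

```python
def kmeans_1d(depths, k=2, iters=20):
    """Toy 1-D k-means over per-pixel depth values: returns a cluster label
    per depth and the cluster centers. Pixels sharing a label are taken to
    represent one object at roughly one depth."""
    # seed centers with evenly spaced sorted depths
    centers = sorted(depths)[:: max(1, len(depths) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for d in depths:
            i = min(range(len(centers)), key=lambda j: abs(d - centers[j]))
            clusters[i].append(d)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(len(centers)), key=lambda j: abs(d - centers[j]))
              for d in depths]
    return labels, centers
```

A production implementation would cluster the full depth map (e.g. with scikit-learn's KMeans) and possibly add spatial coordinates to the feature vector so that objects at similar depths but distant in the image are separated.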
This method can identify objects at different depths in an image, for application in fields such as foreground extraction and three-dimensional reconstruction.
Fig. 7 diagrammatically illustrates the block diagram of the image processing system 700 according to the embodiment of the present disclosure.
As shown in fig. 7, image processing system 700 includes acquisition module 710 and processing module 720.
The acquisition module 710, which for example performs operation S210 described above with reference to Fig. 2, is used to obtain at least two frames of images at different moments through a single camera, where at least part of the content of the at least two frames of images corresponds.
The processing module 720, which for example performs operation S220 described above with reference to Fig. 2, is used to determine, based on the acquisition parameters used when the two frames of images are collected, the depth of the feature points of the partially corresponding content in the two frames of images, where the acquisition parameters include a position-change parameter used to determine the position change of the camera between the two image acquisitions.
Fig. 8 diagrammatically illustrates the block diagram of the processing module 720 according to the embodiment of the present disclosure.
As shown in Fig. 8, the processing module 720 includes a first determination submodule 810, a processing submodule 820 and a second determination submodule 830.
The first determination submodule 810, which for example performs operation S310 described above with reference to Fig. 3, is used to determine, based on the acquisition parameters used when the two frames of images are collected, the rotation vector between the collection direction of the first image and the collection direction of the second image in the two frames of images, where the acquisition parameters further include the focal length of the camera.
The processing submodule 820, which for example performs operation S320 described above with reference to Fig. 3, is used to process, based on the rotation vector, the feature points of the partially corresponding content in the first image.
The second determination submodule 830, which for example performs operation S330 described above with reference to Fig. 3, is used to determine the depth of the feature points of the partially corresponding content in the two frames of images based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters.
According to an embodiment of the present disclosure, determining, based on the acquisition parameters used when the two frames of images are collected, the rotation vector between the collection direction of the first image and the collection direction of the second image in the two frames of images includes: selecting multiple groups of corresponding feature points from the two frames of images; processing, based on the rotation vector between the collection direction of the first image and the collection direction of the second image, the feature points of the partially corresponding content in the first image; and determining the optimal solution of the rotation vector from the common intersection point of the lines connecting the processed feature points with the corresponding feature points in the second image.
Fig. 9 diagrammatically illustrates the block diagram of the second determination sub-module 830 according to the embodiment of the present disclosure.
As shown in Fig. 9, the second determination submodule 830 includes a first determining unit 910, a second determining unit 920, a third determining unit 930 and a fourth determining unit 940.
The first determining unit 910, which for example performs operation S510 described above with reference to Fig. 5A, is used to determine a first feature point, where the position of the first feature point in the first image after rotation-vector processing is the same as the position of the first feature point in the second image.
The second determining unit 920, which for example performs operation S520 described above with reference to Fig. 5A, is used to determine a second feature point, where the position of the second feature point in the first image after rotation-vector processing is different from the position of the second feature point in the second image.
The third determining unit 930, which for example performs operation S530 described above with reference to Fig. 5A, is used to determine, according to the acquisition parameters, the position change of the camera between the two image acquisitions.
The fourth determining unit 940, which for example performs operation S540 described above with reference to Fig. 5A, is used to determine the depth of the second feature point in the two frames of images according to the positions of the first feature point and the second feature point in the two frames of images, and the distance and direction of the camera movement.
Figure 10 diagrammatically illustrates the block diagram of the image processing system 1000 according to another embodiment of the disclosure.
As shown in Fig. 10, the image processing system 1000 includes the acquisition module 710, the processing module 720 and an identification module 1010.
The identification module 1010, which for example performs operation S610 described above with reference to Fig. 6, is used to cluster the feature points based on the depths of the feature points in the image, and to identify objects at different depths in the image.
It can be understood that the above modules may be combined into one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functions of one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. According to an embodiment of the present invention, at least one of the above modules may be implemented at least partly as a hardware circuit, for example a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package, or an application-specific integrated circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable means of integrating or packaging circuits, or by an appropriate combination of the three implementation approaches of software, hardware and firmware. Alternatively, at least one of the above modules may be implemented at least partly as a computer program module which, when run by a computer, can perform the function of the corresponding module.
Figure 11 diagrammatically illustrates the block diagram of electronic equipment 1100 in accordance with an embodiment of the present disclosure.
As shown in Fig. 11, the electronic device 1100 includes a processor 1110 and a memory 1120. The electronic device 1100 can perform the methods described above with reference to Fig. 2, Fig. 3, Fig. 5A or Fig. 6, so as to realize the calculation of image depth.
Specifically, the processor 1110 may include, for example, a general-purpose microprocessor, an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (for example, an application-specific integrated circuit (ASIC)), and so on. The processor 1110 may also include onboard memory for caching purposes. The processor 1110 may be a single processing unit or multiple processing units for performing the different actions of the method flows according to the embodiments of the disclosure described with reference to Fig. 2, Fig. 3, Fig. 5A or Fig. 6.
The memory 1120 may be, for example, any medium that can contain, store, communicate, propagate or transport instructions. For example, a readable storage medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or hard disk (HDD); optical storage devices, such as compact disc (CD-ROM); memories, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
The memory 1120 may include a computer program 1121, which may include code/computer-executable instructions that, when executed by the processor 1110, cause the processor 1110 to perform, for example, the method flows described above in conjunction with Fig. 2, Fig. 3, Fig. 5A or Fig. 6 and any variations thereof.
The computer program 1121 can be configured with, for example, computer program code including computer program modules. For example, in an exemplary embodiment, the code in the computer program 1121 may include one or more program modules, for example module 1121A, module 1121B, and so on. It should be noted that the division and number of modules are not fixed; those skilled in the art can use suitable program modules or combinations of program modules according to the actual situation. When these combinations of program modules are executed by the processor 1110, the processor 1110 can perform, for example, the method flows described above in conjunction with Fig. 2, Fig. 3, Fig. 5A or Fig. 6 and any variations thereof.
According to an embodiment of the present invention, at least one of the modules described above can be implemented as a computer program module as described with reference to Fig. 11, which, when executed by the processor 1110, can realize the corresponding operations described above.
It will be understood by those skilled in the art that the features described in the embodiments and/or claims of the present disclosure can be combined in multiple ways, even if such combinations are not expressly recited in the present disclosure. In particular, without departing from the spirit or teaching of the present disclosure, the features described in the embodiments and/or claims of the present disclosure can be combined in multiple ways. All these combinations fall within the scope of the present disclosure.
Although the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it should be understood by those skilled in the art that various changes in form and detail may be made without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Therefore, the scope of the present disclosure should not be limited to the above embodiments, but should be determined not only by the appended claims but also by the equivalents of the appended claims.

Claims (10)

1. An image processing method, comprising:
obtaining at least two frames of images at different moments through a single camera, wherein at least part of the content of the at least two frames of images corresponds; and
determining, based on acquisition parameters used when the two frames of images are collected, a depth of feature points of the partially corresponding content in the two frames of images, wherein the acquisition parameters comprise a position-change parameter, and the position-change parameter is used to determine a position change of the camera between the two image acquisitions.
2. The method according to claim 1, wherein the determining, based on the acquisition parameters used when the two frames of images are collected, the depth of the feature points of the partially corresponding content in the two frames of images comprises:
determining, based on the acquisition parameters used when the two frames of images are collected, a rotation vector between a collection direction of a first image and a collection direction of a second image in the two frames of images, wherein the acquisition parameters further comprise a focal length of the camera;
processing, based on the rotation vector, the feature points of the partially corresponding content in the first image; and
determining the depth of the feature points of the partially corresponding content in the two frames of images based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters.
3. The method according to claim 2, wherein the determining, based on the acquisition parameters used when the two frames of images are collected, the rotation vector between the collection direction of the first image and the collection direction of the second image in the two frames of images comprises:
selecting multiple groups of corresponding feature points from the two frames of images; processing, based on the rotation vector between the collection direction of the first image and the collection direction of the second image, the feature points of the partially corresponding content in the first image; and determining an optimal solution of the rotation vector from a common intersection point of the lines connecting the processed feature points with the corresponding feature points in the second image.
4. The method according to claim 2, wherein the determining the depth of the feature points of the partially corresponding content in the two frames of images based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters comprises:
determining a first feature point, wherein a position of the first feature point in the first image after rotation-vector processing is the same as a position of the first feature point in the second image;
determining a second feature point, wherein a position of the second feature point in the first image after rotation-vector processing is different from a position of the second feature point in the second image;
determining, according to the acquisition parameters, the position change of the camera between the two image acquisitions; and
determining a depth of the second feature point in the two frames of images according to the positions of the first feature point and the second feature point in the two frames of images, and a distance and direction of the camera movement.
5. The method according to claim 1, further comprising:
clustering the feature points based on depths of the feature points in the image, and identifying objects at different depths in the image.
6. An electronic device, comprising:
a processor; and
a memory having computer-readable instructions stored thereon which, when executed by the processor, cause the processor to:
obtain at least two frames of images at different moments through a single camera, wherein at least part of the content of the at least two frames of images corresponds; and
determine, based on acquisition parameters used when the two frames of images are collected, a depth of feature points of the partially corresponding content in the two frames of images, wherein the acquisition parameters comprise a position-change parameter, and the position-change parameter is used to determine a position change of the camera between the two image acquisitions.
7. The electronic device according to claim 6, wherein the processor determining, based on the acquisition parameters used when the two frames of images are collected, the depth of the feature points of the partially corresponding content in the two frames of images comprises:
determining, based on the acquisition parameters used when the two frames of images are collected, a rotation vector between a collection direction of a first image and a collection direction of a second image in the two frames of images, wherein the acquisition parameters further comprise a focal length of the camera;
processing, based on the rotation vector, the feature points of the partially corresponding content in the first image; and
determining the depth of the feature points of the partially corresponding content in the two frames of images based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters.
8. The electronic device according to claim 7, wherein the processor determining, based on the acquisition parameters used when the two frames of images are collected, the rotation vector between the collection direction of the first image and the collection direction of the second image in the two frames of images comprises:
selecting multiple groups of corresponding feature points from the two frames of images; processing, based on the rotation vector between the collection direction of the first image and the collection direction of the second image, the feature points of the partially corresponding content in the first image; and determining an optimal solution of the rotation vector from a common intersection point of the lines connecting the processed feature points with the corresponding feature points in the second image.
9. The electronic device according to claim 7, wherein the processor determining the depth of the feature points of the partially corresponding content in the two frames of images based on the processed feature points, the corresponding feature points in the second image, and the acquisition parameters comprises:
determining a first feature point, wherein a position of the first feature point in the first image after rotation-vector processing is the same as a position of the first feature point in the second image;
determining a second feature point, wherein a position of the second feature point in the first image after rotation-vector processing is different from a position of the second feature point in the second image;
determining, according to the acquisition parameters, the position change of the camera between the two image acquisitions; and
determining a depth of the second feature point in the two frames of images according to the positions of the first feature point and the second feature point in the two frames of images, and a distance and direction of the camera movement.
10. An image processing system, comprising:
an acquisition module, for obtaining at least two frames of images at different moments through a single camera, wherein at least part of the content of the at least two frames of images corresponds; and
a processing module, for determining, based on acquisition parameters used when the two frames of images are collected, a depth of feature points of the partially corresponding content in the two frames of images, wherein the acquisition parameters comprise a position-change parameter, and the position-change parameter is used to determine a position change of the camera between the two image acquisitions.
CN201710903510.3A 2017-09-28 2017-09-28 Image processing method, electronic device and image processing system Active CN107749069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710903510.3A CN107749069B (en) 2017-09-28 2017-09-28 Image processing method, electronic device and image processing system


Publications (2)

Publication Number Publication Date
CN107749069A true CN107749069A (en) 2018-03-02
CN107749069B CN107749069B (en) 2020-05-26

Family

ID=61255883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710903510.3A Active CN107749069B (en) 2017-09-28 2017-09-28 Image processing method, electronic device and image processing system

Country Status (1)

Country Link
CN (1) CN107749069B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020019175A1 (en) * 2018-07-24 2020-01-30 深圳市大疆创新科技有限公司 Image processing method and apparatus, and photographing device and unmanned aerial vehicle
CN112132902A (en) * 2019-06-24 2020-12-25 上海安亭地平线智能交通技术有限公司 Vehicle-mounted camera external parameter adjusting method and device, electronic equipment and medium
WO2021238163A1 (en) * 2020-05-28 2021-12-02 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114422736A (en) * 2022-03-28 2022-04-29 荣耀终端有限公司 Video processing method, electronic equipment and computer storage medium
CN114463401A (en) * 2020-11-09 2022-05-10 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102168954A (en) * 2011-01-14 2011-08-31 浙江大学 Monocular-camera-based method for measuring depth, depth field and sizes of objects
CN105376484A (en) * 2015-11-04 2016-03-02 深圳市金立通信设备有限公司 Image processing method and terminal
WO2016062996A1 (en) * 2014-10-20 2016-04-28 Bae Systems Plc Apparatus and method for multi-camera visual odometry
US20170019655A1 (en) * 2015-07-13 2017-01-19 Texas Instruments Incorporated Three-dimensional dense structure from motion with stereo vision
CN107025666A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Depth detection method and device and electronic installation based on single camera

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102168954A (en) * 2011-01-14 2011-08-31 浙江大学 Monocular-camera-based method for measuring depth, depth field and sizes of objects
WO2016062996A1 (en) * 2014-10-20 2016-04-28 Bae Systems Plc Apparatus and method for multi-camera visual odometry
US20170019655A1 (en) * 2015-07-13 2017-01-19 Texas Instruments Incorporated Three-dimensional dense structure from motion with stereo vision
CN105376484A (en) * 2015-11-04 2016-03-02 深圳市金立通信设备有限公司 Image processing method and terminal
CN107025666A (en) * 2017-03-09 2017-08-08 广东欧珀移动通信有限公司 Depth detection method and device and electronic installation based on single camera

Non-Patent Citations (2)

Title
Sun Pengfei et al., "Monocular multi-angle spatial point coordinate measurement method", Chinese Journal of Scientific Instrument *
Xu Lingyu, "Research on the simulation model of a vision coordinate measuring machine", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

Publication number Priority date Publication date Assignee Title
WO2020019175A1 (en) * 2018-07-24 2020-01-30 深圳市大疆创新科技有限公司 Image processing method and apparatus, and photographing device and unmanned aerial vehicle
CN110800023A (en) * 2018-07-24 2020-02-14 深圳市大疆创新科技有限公司 Image processing method and equipment, camera device and unmanned aerial vehicle
CN112132902A (en) * 2019-06-24 2020-12-25 上海安亭地平线智能交通技术有限公司 Vehicle-mounted camera external parameter adjusting method and device, electronic equipment and medium
CN112132902B (en) * 2019-06-24 2024-01-16 上海安亭地平线智能交通技术有限公司 Vehicle-mounted camera external parameter adjusting method and device, electronic equipment and medium
WO2021238163A1 (en) * 2020-05-28 2021-12-02 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium
CN114463401A (en) * 2020-11-09 2022-05-10 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN114422736A (en) * 2022-03-28 2022-04-29 荣耀终端有限公司 Video processing method, electronic equipment and computer storage medium
CN114422736B (en) * 2022-03-28 2022-08-16 荣耀终端有限公司 Video processing method, electronic equipment and computer storage medium

Also Published As

Publication number Publication date
CN107749069B (en) 2020-05-26

Similar Documents

Publication Publication Date Title
CN107749069A (en) Image processing method, electronic equipment and image processing system
CN110246147B (en) Visual inertial odometer method, visual inertial odometer device and mobile equipment
EP3028252B1 (en) Rolling sequential bundle adjustment
Honegger et al. Real-time and low latency embedded computer vision hardware based on a combination of FPGA and mobile CPU
US20210133920A1 (en) Method and apparatus for restoring image
US20180262685A1 (en) Apparatus and methods for image alignment
JP6590792B2 (en) Method, apparatus and display system for correcting 3D video
CN111354042A (en) Method and device for extracting features of robot visual image, robot and medium
US11620757B2 (en) Dense optical flow processing in a computer vision system
WO2019104571A1 (en) Image processing method and device
JP2020506487A (en) Apparatus and method for obtaining depth information from a scene
US20180181816A1 (en) Handling Perspective Magnification in Optical Flow Processing
Honegger et al. Embedded real-time multi-baseline stereo
CN105989603A (en) Machine vision image sensor calibration
US11042984B2 (en) Systems and methods for providing image depth information
US20150310620A1 (en) Structured stereo
WO2023005457A1 (en) Pose calculation method and apparatus, electronic device, and readable storage medium
US11682212B2 (en) Hierarchical data organization for dense optical flow processing in a computer vision system
AliAkbarpour et al. Parallax-tolerant aerial image georegistration and efficient camera pose refinement—without piecewise homographies
US8509522B2 (en) Camera translation using rotation from device
CN110062165A (en) Method for processing video frequency, device and the electronic equipment of electronic equipment
CN109658507A (en) Information processing method and device, electronic equipment
JP6154759B2 (en) Camera parameter estimation apparatus, camera parameter estimation method, and camera parameter estimation program
JP6080424B2 (en) Corresponding point search device, program thereof, and camera parameter estimation device
CN113628284A (en) Pose calibration data set generation method, device and system, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant