WO2019047245A1 - Image processing method, electronic device and computer readable storage medium - Google Patents

Image processing method, electronic device and computer readable storage medium

Info

Publication number
WO2019047245A1
WO2019047245A1 PCT/CN2017/101309 CN2017101309W WO2019047245A1 WO 2019047245 A1 WO2019047245 A1 WO 2019047245A1 CN 2017101309 W CN2017101309 W CN 2017101309W WO 2019047245 A1 WO2019047245 A1 WO 2019047245A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
point
determining
processing method
Prior art date
Application number
PCT/CN2017/101309
Other languages
English (en)
Chinese (zh)
Inventor
谢俊
陈爽新
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司 filed Critical 深圳市柔宇科技有限公司
Priority to PCT/CN2017/101309 priority Critical patent/WO2019047245A1/fr
Priority to CN201780091703.1A priority patent/CN110741633A/zh
Publication of WO2019047245A1 publication Critical patent/WO2019047245A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the present invention relates to image processing techniques, and more particularly to an image processing method, an electronic device, and a computer readable storage medium.
  • The left-eye image and the right-eye image used to form a 3D image may deviate from each other. For example, when the left-eye image is acquired by a single camera and the right-eye image is acquired after the camera is translated, the camera may shift vertically or rotate during the translation, so the left-eye image and the right-eye image may not synthesize the 3D image properly. How to adjust the left-eye image and the right-eye image so that they can properly synthesize a 3D image therefore becomes a technical problem to be solved.
  • Embodiments of the present invention provide an image processing method, an electronic device, and a computer readable storage medium.
  • the present invention provides an image processing method for an electronic device, and the image processing method includes:
  • the second image is adjusted according to the position adjustment amount.
  • the present invention provides an electronic device including a processor, the processor for:
  • the second image is adjusted according to the position adjustment amount.
  • the invention provides an electronic device comprising:
  • One or more processors;
  • A memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when the programs are executed by the processors, the steps of the image processing method are performed.
  • the present invention provides a computer readable storage medium, characterized in that the computer readable storage medium stores an image processing program, the image processing program being executed by at least one processor to complete the steps of the image processing method.
  • The image processing method, the electronic device, and the computer readable storage medium determine a position adjustment amount according to the coordinate points of a first feature point of a first image and a second feature point of a second image, and adjust the second image according to the position adjustment amount, so that the first image and the second image are suitable for synthesizing a 3D image, thereby facilitating the synthesis of the first image and the second image into a 3D image.
  • FIG. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart of a first embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic block diagram of a first embodiment of an electronic device according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart diagram of a second embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic block diagram of a second embodiment of an electronic device according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart diagram of a third embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart diagram of a fourth embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 9 is a schematic flow chart of a fifth embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart diagram of a sixth embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 11 is a schematic flow chart of a seventh embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 12 is a schematic flowchart diagram of an eighth embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 13 is a schematic flowchart diagram of a ninth embodiment of an image processing method according to an embodiment of the present invention.
  • FIG. 14 is a scene view of a first embodiment of a first image and a second image according to an embodiment of the present invention.
  • FIG. 15 is a scene view of a second embodiment of the first image and the second image according to an embodiment of the present invention.
  • FIG. 16 is a scene view of a third embodiment of the first image and the second image according to an embodiment of the present invention.
  • FIG. 17 is a schematic block diagram of a third embodiment of an electronic device according to an embodiment of the present invention.
  • FIG. 18 is a schematic diagram showing the connection of an electronic device and a computer readable storage medium according to an embodiment of the present invention.
  • The electronic device 100, the processor 10, the camera 20, the memory 30, and the computer readable storage medium 500.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include one or more of the described features either explicitly or implicitly.
  • The meaning of "a plurality" is two or more, unless otherwise clearly and specifically defined.
  • In the description of the present invention, it should be noted that the terms "installation", "connection", and "connected" are to be understood broadly unless otherwise explicitly specified and defined; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection, an electrical connection, or mutual communication; it may be a direct connection or an indirect connection through an intermediate medium, and it may be the internal communication of two elements or an interaction relationship between two elements. For those skilled in the art, the specific meanings of the above terms in the present invention can be understood on a case-by-case basis.
  • The image processing method of the embodiment of the present invention includes the following steps (a sketch of the overall flow follows the step list):
  • Step S112 acquiring a first image and a second image, wherein the first image has a certain degree of similarity with the second image;
  • Step S114 determining a first feature point of the first image, and searching for a second feature point corresponding to the first feature point in the second image;
  • Step S116 establishing a coordinate system for the first image and the second image, and determining initial coordinate points of the first feature point and the second feature point;
  • Step S118 adjusting the position of the second feature point in a preset manner;
  • Step S122 determining the adjusted coordinate point of the second feature point after the position adjustment;
  • Step S124 Calculating a relative distance between the first feature point and the second feature point according to the initial coordinate point of the first feature point and the adjusted coordinate point of the second feature point;
  • Step S126 determining a position adjustment amount of the second feature point when a relative distance between the first feature point and the second feature point is minimum;
  • Step S128 Adjust the second image according to the position adjustment amount.
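  • The following is a minimal sketch of the overall flow of steps S112 to S128, assuming Python with OpenCV and NumPy; the helper names match_features, find_best_rotation, and find_best_shift are illustrative (not from the patent) and are sketched in later sections.

```python
# Hedged sketch of steps S112-S128; the helper functions are defined in later sketches.
import cv2
import numpy as np

def align_second_image(img1, img2):
    # S114: find the first feature points and the corresponding second feature points
    pts1, pts2 = match_features(img1, img2)
    h, w = img2.shape[:2]
    center = (w / 2.0, h / 2.0)
    # S118-S126: search for the rotation angle / moving distance that minimises the
    # relative distance between the matched feature points
    angle = find_best_rotation(pts1, pts2, center)   # degrees
    dy = find_best_shift(pts1, pts2)                 # pixels, positive = down
    # S128: adjust the second image by the position adjustment amount
    M_rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    adjusted = cv2.warpAffine(img2, M_rot, (w, h))
    M_shift = np.float32([[1, 0, 0], [0, 1, dy]])
    return cv2.warpAffine(adjusted, M_shift, (w, h))
```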
  • the electronic device 100 of the embodiment of the present invention includes a processor 10.
  • the processor 10 is configured to:
  • the second image is adjusted according to the position adjustment amount.
  • the image processing method according to the embodiment of the present invention can be realized by the electronic device 100 of the embodiment of the present invention.
  • The image processing method and the electronic device 100 of the embodiment of the present invention determine the position adjustment amount according to the coordinate points of the first feature point of the first image and the second feature point of the second image, and adjust the second image according to the position adjustment amount, thereby making the first image and the second image suitable for synthesizing a 3D image and facilitating the synthesis of the first image and the second image into a 3D image.
  • the electronic device 100 includes, but is not limited to, a mobile phone, a computer, a camera, and the like.
  • the first image has a certain degree of similarity to the second image.
  • The first image and the second image may refer to the left-eye image and the right-eye image of a 3D image, wherein the first image may refer to one of the left-eye image and the right-eye image, and the second image may refer to the other.
  • the first image and the second image are paired with each other, for example, the first image refers to a left eye image of a 3D image, and the second image refers to a right eye image of the 3D image.
  • In some embodiments, the first image and the second image are different images of the same subject (the same person, the same object, etc.).
  • Whether the first image has a certain degree of similarity with the second image may be determined by judging whether the proportion of similar feature points between the first image and the second image exceeds a preset ratio.
  • the preset ratio may be preset in the electronic device 100 or set according to user requirements. For example, the preset ratio may be 80%, 90%, etc., and is not specifically limited herein.
  • In this embodiment, the image processing method adjusts the position of the second image so that the first image and the second image are suitable for synthesizing the 3D image. It can be understood that, in other embodiments, the image processing method may adjust the first image before or after adjusting the second image, and the first image may be adjusted in the same way as the second image; this is not specifically limited herein.
  • In some embodiments, the first image and the second image are stored in the electronic device 100, so that the electronic device 100 can directly read the first image and the second image and process them with the image processing method of the embodiment of the present invention.
  • In some embodiments, the electronic device 100 includes a communication module, which may be a WiFi module that communicates with the cloud or a Bluetooth module that communicates with another electronic device 100, and the electronic device 100 obtains the first image and the second image through the communication module.
  • the electronic device 100 includes two cameras 20, and step S112 includes:
  • Step S1122 Control the two cameras 20 to acquire the first image and the second image, respectively.
  • the processor 10 controls the two cameras 20 to acquire the first image and the second image, respectively.
  • the electronic device 100 can acquire the first image and the second image through the two cameras 20, respectively.
  • In some cases, the positions of the two cameras 20 of the electronic device 100 may deviate because of limitations of the manufacturing process or a large error introduced when the cameras 20 are mounted.
  • In that case, the first image and the second image acquired by the two cameras 20 cannot be combined into a qualified 3D image. Therefore, the acquired first image and second image can be adjusted by using the image processing method so that they can synthesize a qualified 3D image.
  • In this embodiment, the first image and the second image are different images of the same subject (the same person, the same object, etc.) acquired by the pair of cameras 20.
  • the electronic device 100 includes a single camera 20, and the step S112 specifically includes:
  • Step S1124 Control the camera 20 to acquire the first image
  • Step S1126 Acquire a second image after the position of the camera 20 is changed.
  • the camera 20 acquires the first image and the second image at two different locations, respectively, that is, the camera 20 acquires the first image at the first location and acquires the second image at a second location that is different from the first location.
  • the processor 10 controls the camera 20 to acquire a first image, and acquires a second image after a change in the position of the camera 20.
  • steps S1124 and S1126 may be implemented by the processor 10.
  • the electronic device 100 can acquire the first image and the second image through a single camera 20.
  • In this embodiment, the electronic device 100 has only one camera 20. Therefore, the electronic device 100 can acquire the first image at the first position and the second image at the second position by controlling the camera 20, so that the first image and the second image can be combined into a 3D image. Since the changed position of the camera 20 may be unsatisfactory, for example because of an excessively large movement or rotation angle, the first image and the second image acquired by the single camera 20 of the electronic device 100 may not synthesize a qualified 3D image. The image processing method, that is, steps S114, S116, S118, S122, S124, S126, and S128, can then be used to adjust the second image so that the first image and the second image can synthesize a qualified 3D image.
  • In this embodiment, the first image and the second image are different images of the same subject (the same person, the same object, etc.) acquired by the same camera 20 at different locations.
  • That the second feature point corresponds to the first feature point means that the first feature point and the second feature point are similar to each other.
  • Step S114 may search for the second feature point corresponding to the first feature point in the second image by using a feature matching algorithm with rotation invariance, such as the SIFT (scale-invariant feature transform) algorithm or the SURF (speeded-up robust features) algorithm.
  • The SIFT algorithm detects local features in an image by finding the extreme points of the image in scale space and extracting their position, scale, and rotation invariants.
  • The SURF algorithm is a highly robust local feature point detection algorithm improved from the SIFT algorithm, and can be applied to object recognition and 3D reconstruction in computer vision. A first feature point is obtained in the first image, and the second feature point corresponding to the first feature point is searched for in the second image according to the acquired first feature point, thereby obtaining the first feature point and the second feature point that are paired with each other.
  • The feature points may be global feature points, such as color features, texture features, or the shape of a main object, or local feature points, such as spots (blobs) and corner points.
  • A spot refers to an area that differs in color or grayscale from its surroundings, such as a wild flower or an insect in a patch of weeds.
  • A corner point refers to a corner of an object in an image or an intersection between lines. The above SIFT and SURF algorithms can be used to detect the spots in the image (the extreme points mentioned above) as feature points.
  • For example, the eye corner of a human eye in the first image may be acquired as the first feature point, and the second feature point corresponding to that eye corner may then be found in the second image according to the features of the eye corner in the first image (e.g., pixel values, color, etc.), thereby obtaining the first feature point and the second feature point that are paired with each other (see the feature-matching sketch below).
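  • A minimal sketch of step S114 using SIFT feature matching, assuming an OpenCV build that provides cv2.SIFT_create (OpenCV 4.4 or later); the SURF algorithm could be substituted where it is available. The function returns the paired first and second feature points as pixel coordinates.

```python
import cv2
import numpy as np

def match_features(img1, img2, max_matches=100):
    """Detect first feature points in img1 and the corresponding second feature points in img2."""
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)
    # Cross-checked brute-force matching keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts1, pts2
```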
  • step S118 is specifically:
  • Step S1182 Rotating the second feature point in a preset direction to change the position of the second feature point.
  • the processor 10 rotates the second feature point in a predetermined direction and changes the position of the second feature point.
  • the position of the second feature point can be changed by rotating the second feature point, and thus the position adjustment amount of the second feature point can be obtained, thereby adjusting the second image according to the position adjustment amount.
  • Specifically, the second feature point is rotated in the preset direction, the position adjustment amount is determined according to the position of the second feature point after the rotation, and the second image is then adjusted according to the position adjustment amount of the second feature point (a sketch of such a point rotation is given below).
  • the preset direction may be a clockwise direction or a counterclockwise direction. It should be noted that, in the embodiment, when the image processing method only adjusts the position of the second image, the preset direction may be any one of a clockwise direction and a counterclockwise direction.
  • In other embodiments, the image processing method may adjust the positions of the first image and the second image simultaneously; the direction in which the second image is rotated may be either the clockwise or the counterclockwise direction, and the first image is then rotated in the other of the two directions. In this way, the positions of the first image and the second image are adjusted simultaneously, which shortens the adjustment time and improves the efficiency of the image processing method.
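  • A minimal sketch of step S1182, rotating feature-point coordinates about a chosen rotation center by a trial angle; the angle convention is chosen here to match cv2.getRotationMatrix2D so that the same angle can later be applied to the second image (an implementation assumption, not specified in the patent).

```python
import numpy as np

def rotate_points(points, center, angle_deg):
    """Rotate an (N, 2) array of pixel coordinates about `center` by `angle_deg` degrees.
    Positive angles appear counterclockwise on screen (image y grows downward),
    matching the convention of cv2.getRotationMatrix2D."""
    theta = np.radians(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s], [-s, c]])
    return (np.asarray(points, dtype=float) - np.asarray(center)) @ rot.T + np.asarray(center)
```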
  • step S118 is specifically:
  • Step S1184 moving the second feature point upward or downward in the vertical direction to change the position of the second feature point.
  • the processor 10 moves the second feature point up or down in the vertical direction to change the position of the second feature point.
  • In this way, the second feature point can be moved upward or downward in the vertical direction to change its position, so that the position adjustment amount of the second feature point is obtained and the second image is then adjusted according to that position adjustment amount.
  • Specifically, when there is a positional deviation between the first image and the second image, the second feature point may be moved upward or downward in the vertical direction, the position adjustment amount of the second feature point may be determined according to the position of the second feature point after the movement, and the second image may then be adjusted according to that position adjustment amount.
  • In some embodiments, there is only a rotational deviation between the first image and the second image, and the first image and the second image can be adjusted to be capable of synthesizing a qualified 3D image through step S1182 and the subsequent steps of the image processing method. In some embodiments, there is only a translational deviation, and the first image and the second image can be adjusted to be capable of synthesizing a qualified 3D image through step S1184 and the subsequent steps.
  • In some embodiments, there are both a rotational deviation and a translational deviation between the first image and the second image, and the first image and the second image need to be adjusted through steps S1182 and S1184 and the subsequent steps of the image processing method to be capable of synthesizing the 3D image. In this case, the rotation deviation may be eliminated first through step S1182 and the translation deviation then eliminated through step S1184, or the translation deviation may be eliminated first through step S1184 and the rotation deviation then eliminated through step S1182. Therefore, the second feature point can be adjusted according to the actual conditions (see the sketch below).
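  • As a sketch of the combined case, the two corrections can be applied in the order described above, here rotation first and then the vertical shift; rotate_points is the helper from the earlier sketch, and the sign convention for dy (positive moves the points downward in pixel coordinates) is an assumption.

```python
def adjust_points(pts2, center, angle_deg, dy):
    """Apply the rotation correction (S1182) first, then the vertical shift (S1184)."""
    adjusted = rotate_points(pts2, center, angle_deg)
    adjusted[:, 1] += dy  # positive dy moves the points downward in pixel coordinates
    return adjusted
```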
  • step S124 includes:
  • Step S1242 Calculate the variance of the differences between the ordinate of the initial coordinate point of the first feature point and the ordinate of the adjusted coordinate point of the second feature point.
  • That is, the processor 10 calculates the variance of the differences between the ordinate of the initial coordinate point of the first feature point and the ordinate of the adjusted coordinate point of the second feature point.
  • In this way, the relative distance between the first feature point and the second feature point can be measured by the variance of the differences between the ordinate of the initial coordinate point of the first feature point and the ordinate of the adjusted coordinate point of the second feature point.
  • step S126 includes:
  • Step S1262 When the variance of the difference of the ordinate is the smallest, determine that the relative distance between the first feature point and the second feature point is the smallest and determine that the second feature point is adjusted to the target position at this time;
  • Step S1264 determining a target coordinate point when the second feature point is adjusted to the target position
  • Step S1266 Calculate the position adjustment amount of the second feature point according to the initial coordinate point of the second feature point and the target coordinate point.
  • the position adjustment amount is at least one of a rotation angle and a movement distance.
  • the processor 10 is configured to:
  • the position adjustment amount of the second feature point is calculated according to the initial coordinate point of the second feature point and the target coordinate point.
  • the position adjustment amount is at least one of a rotation angle and a movement distance.
  • When the variance of the differences of the ordinates is the smallest, it can be determined that the relative distance between the first feature point and the second feature point is the smallest, indicating that the second feature point has been adjusted to the target position. The target coordinate point of the second feature point at the target position can then be determined, and the position adjustment amount of the second feature point can be calculated according to the initial coordinate point and the target coordinate point of the second feature point.
  • When the position of the second feature point is adjusted by rotation, the obtained position adjustment amount should be a rotation angle; when it is adjusted by translation, the obtained position adjustment amount should be a movement distance.
  • The position adjustment amount is at least one of a rotation angle and a movement distance. It can be understood that the position adjustment amount may be a rotation angle, a movement distance, or both a rotation angle and a movement distance.
  • Specifically, the position of the second feature point may be adjusted at a preset rotation step, for example, every 1 degree, and the variance of the differences between the ordinates of the second feature point and the first feature point may be calculated after each rotation, so that the second feature point is determined to be adjusted to the target position when that variance is the smallest.
  • For translation, a translation direction (up or down) may be set and the second feature point moved in that direction; the variance of the differences between the ordinates of the translated second feature point and the first feature point is then calculated to see whether it increases or decreases. If the variance increases, the second feature point can be moved in the opposite direction; if the variance decreases, the movement can continue in the same direction. When the variance of the differences between the ordinates of the first feature point and the second feature point is the smallest, it is judged that the second feature point has been adjusted to the target position (see the search sketch below).
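  • The following sketch covers steps S1242 to S1266: the rotation angle is swept at a preset step (the 1-degree step follows the example above; the ±30° search range is an assumption) and the angle giving the smallest variance of the ordinate differences is kept; rotate_points is the helper from the earlier sketch. Note that the variance of the ordinate differences does not change under a uniform vertical shift, so for the moving distance this sketch instead zeroes the mean of the ordinate differences; this is a substitution made for illustration, not the criterion stated in the patent.

```python
import numpy as np

def ordinate_variance(pts1, pts2):
    """Variance of the differences between the ordinates (y) of matched feature points (S1242)."""
    return float(np.var(pts1[:, 1] - pts2[:, 1]))

def find_best_rotation(pts1, pts2, center, step=1.0, max_angle=30.0):
    """Sweep trial rotation angles and keep the one with the smallest ordinate-difference variance."""
    angles = np.arange(-max_angle, max_angle + step, step)
    variances = [ordinate_variance(pts1, rotate_points(pts2, center, a)) for a in angles]
    return float(angles[int(np.argmin(variances))])

def find_best_shift(pts1, pts2):
    """Vertical moving distance; the mean ordinate difference is used here because the
    variance criterion is insensitive to a uniform shift (see the note above)."""
    return float(np.mean(pts1[:, 1] - pts2[:, 1]))
```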
  • step S128 is specifically:
  • Step S1282 Control the second image to rotate the rotation angle around the center of the second image in a preset direction to adjust the second image to the target position.
  • processor 10 is configured to:
  • the second image is controlled to rotate a rotation angle about a center of the second image in a preset direction to adjust the second image to the target position.
  • the second image can be rotated according to the angle of rotation to adjust the second image to the target position.
  • In step S1282, when the position of the second feature point was rotated in the preset direction about the center of the second image, step S1282 rotates the second image about its own center accordingly.
  • In other embodiments, the origin of the coordinate system may be used as the rotation center, or any point of the second image may be used as the rotation center, etc.; the rotation center used when rotating the second image should be the same as the rotation center used when rotating the second feature point, and it is not specifically limited herein (see the sketch below).
  • the rotation direction of the second image may be consistent with the rotation direction of the second feature point. For example, the rotation direction of the second feature point is clockwise, and the rotation direction of the second image is also clockwise.
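  • A minimal sketch of step S1282, assuming OpenCV; keeping the output canvas at the original size is an implementation choice rather than something specified in the patent.

```python
import cv2

def rotate_image(img, angle_deg):
    """Rotate the second image about its own center by the computed rotation angle (S1282)."""
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))
```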
  • step S128 is specifically:
  • Step S1284 Control the second image to move the moving distance upward or downward in the vertical direction to adjust the second image to the target position.
  • processor 10 is configured to:
  • the second image is controlled to move the moving distance up or down in the vertical direction to adjust the second image to the target position.
  • the second image can be translated according to the moving distance to adjust the second image to the target position.
  • For example, when the second feature point is moved upward, step S1284 may move the second image upward. It can be understood that, in other implementations, when the second feature point is moved downward, the second image is also moved downward (see the sketch below).
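  • A minimal sketch of step S1284, again assuming OpenCV; positive dy moves the image content downward and negative dy moves it upward.

```python
import cv2
import numpy as np

def shift_image_vertically(img, dy):
    """Move the second image up or down in the vertical direction by the computed moving distance (S1284)."""
    h, w = img.shape[:2]
    M = np.float32([[1, 0, 0], [0, 1, dy]])
    return cv2.warpAffine(img, M, (w, h))
```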
  • step S116 includes:
  • Step S1162 The first image and the second image are established in the same coordinate system.
  • processor 10 is configured to:
  • the first image and the second image are built in the same coordinate system.
  • the coordinate points of the feature points of the first image and the second image can be simultaneously represented by the same coordinate system.
  • Specifically, the center point of the first image may be taken as the origin, the direction from the center point of the first image toward the center point of the second image may be taken as the X-axis direction, and the direction perpendicular to the X-axis direction may be taken as the Y-axis direction to establish the coordinate system. In this way, the first image and the second image are represented in the same coordinate system, and the coordinate points of the feature points of the first image and the second image can be determined quickly through this coordinate system (see the sketch below).
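  • A minimal sketch of step S1162: pixel coordinates from either image are mapped into one coordinate system whose origin is the center of the first image and whose X axis points toward the center of the second image. The side-by-side layout (first image on the left), the optional gap between the images, and the y-up sign flip are assumptions made for illustration.

```python
import numpy as np

def to_shared_coordinates(points, img_shape, image_index=0, gap=0.0):
    """Map (N, 2) pixel coordinates (x right, y down) of image `image_index`
    (0 = first image, 1 = second image) into the shared coordinate system."""
    h, w = img_shape[:2]
    pts = np.asarray(points, dtype=float)
    x = pts[:, 0] - w / 2.0 + image_index * (w + gap)  # offset the second image along the X axis
    y = h / 2.0 - pts[:, 1]                            # flip so that Y grows upward
    return np.stack([x, y], axis=1)
```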
  • For example, as shown in FIG. 14, the image on the right side may be the second image, the image on the left side may be the first image, and the fish eye, fish tail, and the like may be regarded as feature points of the images. When the first image and the second image can normally synthesize the 3D image, the first image and the second image only exhibit a left-right offset; for example, in the same coordinate system, the fish eye in the first image and the fish eye in the second image have the same ordinate (their abscissas differ only by the left-right offset).
  • When there is a deviation in the images taken by the user, for example as shown in FIG. 15, it can be seen that there is a rotation deviation between the fish of the second image on the right side and the fish of the first image on the left side.
  • The image processing method of the embodiment of the invention calculates the rotation angle and rotates the second image of FIG. 15; the rotated second image may directly become the second image shown in FIG. 14 (when there is only a rotation deviation), or may become the second image on the right of FIG. 16 (when there are both rotational and translational deviations).
  • In the latter case, the image processing method of the embodiment of the present invention further calculates the moving distance and translates the second image shown in FIG. 16, so that the first image and the second image capable of normally synthesizing the 3D image, as shown in FIG. 14, can be obtained.
  • an electronic device 100 includes one or more processors 10, a memory 30, and one or more programs.
  • One or more of the programs are stored in memory 30 and are configured to be executed by one or more processors 10.
  • the steps of the image processing method in any of the above embodiments are implemented when the program is executed by the processor 10.
  • a computer readable storage medium 500 stores an image processing program.
  • When the image processing program is executed by at least one processor 10, the steps of the image processing method of any of the above embodiments of the present invention are completed.
  • the computer readable storage medium 500 may be a storage medium built in the electronic device 100 or a storage medium that is pluggably inserted into the electronic device 100.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to an image processing method for an electronic device (100). The image processing method comprises the following steps: (S112) acquiring a first image and a second image, the first image and the second image having a certain degree of similarity; (S114) determining a first feature point of the first image, and searching the second image for a second feature point corresponding to the first feature point; (S116) establishing a coordinate system for the first image and the second image, and determining initial coordinates of the first feature point and the second feature point; (S118) adjusting the position of the second feature point in a preset manner; (S122) determining the adjusted coordinates of the second feature point after the position adjustment; (S124) calculating the relative distance between the first feature point and the second feature point according to the initial coordinate point of the first feature point and the adjusted coordinate point of the second feature point; (S126) determining a position adjustment amount of the second feature point when the relative distance between the first feature point and the second feature point is smallest; (S128) adjusting the second image according to the position adjustment amount. The present invention also relates to an electronic device (100).
PCT/CN2017/101309 2017-09-11 2017-09-11 Image processing method, electronic device and computer readable storage medium WO2019047245A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/101309 WO2019047245A1 (fr) 2017-09-11 2017-09-11 Image processing method, electronic device and computer readable storage medium
CN201780091703.1A CN110741633A (zh) 2017-09-11 2017-09-11 图像处理方法、电子装置和计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/101309 WO2019047245A1 (fr) 2017-09-11 2017-09-11 Image processing method, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2019047245A1 true WO2019047245A1 (fr) 2019-03-14

Family

ID=65633588

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/101309 WO2019047245A1 (fr) 2017-09-11 2017-09-11 Image processing method, electronic device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110741633A (fr)
WO (1) WO2019047245A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115678B (zh) * 2021-11-30 2023-06-27 深圳市锐尔觅移动通信有限公司 内容显示控制方法及相关装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011253376A (ja) * 2010-06-02 2011-12-15 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
CN102170576A (zh) * 2011-01-30 2011-08-31 中兴通讯股份有限公司 双摄像头立体拍摄的处理方法及装置
TWI486052B (zh) * 2011-07-05 2015-05-21 Realtek Semiconductor Corp 立體影像處理裝置以及立體影像處理方法
CN102905147A (zh) * 2012-09-03 2013-01-30 上海立体数码科技发展有限公司 立体图像校正方法及装置
CN105812766B (zh) * 2016-03-14 2017-07-04 吉林大学 一种垂直视差消减方法
CN106534833B (zh) * 2016-12-07 2018-08-07 上海大学 一种联合空间时间轴的双视点立体视频稳定方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010004465A (ja) * 2008-06-23 2010-01-07 Fujinon Corp 立体画像撮影システム
CN102208116A (zh) * 2010-03-29 2011-10-05 卡西欧计算机株式会社 三维建模装置以及三维建模方法
CN102567995A (zh) * 2012-01-04 2012-07-11 朱经纬 图像配准方法
CN104081435A (zh) * 2014-04-29 2014-10-01 中国科学院自动化研究所 一种基于级联二值编码的图像匹配方法
CN105635719A (zh) * 2014-11-20 2016-06-01 三星电子株式会社 用于校准图像的方法和设备
CN106327482A (zh) * 2016-08-10 2017-01-11 东方网力科技股份有限公司 一种基于大数据的面部表情的重建方法及装置

Also Published As

Publication number Publication date
CN110741633A (zh) 2020-01-31

Similar Documents

Publication Publication Date Title
CN112689135B (zh) 投影校正方法、装置、存储介质及电子设备
US9667862B2 (en) Method, system, and computer program product for gamifying the process of obtaining panoramic images
KR101775591B1 (ko) 데이터베이스 생성의 목적을 위한 대화식 및 자동 3-d 오브젝트 스캐닝 방법
CA3145736A1 (fr) Procede et systeme de generation d'images
JP5593177B2 (ja) 点群位置データ処理装置、点群位置データ処理方法、点群位置データ処理システム、および点群位置データ処理プログラム
TWI520098B (zh) 影像擷取裝置及其影像形變偵測方法
WO2012053521A1 (fr) Dispositif de traitement d'informations optiques, procédé de traitement d'informations optiques, système de traitement d'informations optiques et programme de traitement d'informations optiques
JP2014529727A (ja) 自動シーン較正
WO2019037088A1 (fr) Procédé et dispositif de réglage d'exposition, et véhicule aérien sans pilote
CN111195897B (zh) 用于机械手臂系统的校正方法及装置
KR102398478B1 (ko) 전자 디바이스 상에서의 환경 맵핑을 위한 피쳐 데이터 관리
TWI584051B (zh) Three - dimensional environment system of vehicle and its method
KR20180040336A (ko) 카메라 시스템 및 이의 객체 인식 방법
JP5672112B2 (ja) ステレオ画像較正方法、ステレオ画像較正装置及びステレオ画像較正用コンピュータプログラム
JP7107166B2 (ja) 床面検出プログラム、床面検出方法及び端末装置
WO2019075948A1 (fr) Procédé d'estimation de posture pour robot mobile
WO2023087894A1 (fr) Procédé et appareil de réglage de région, ainsi que caméra et support d'enregistrement
CN104392450A (zh) 确定相机焦距与旋转角度的方法、相机标定方法及系统
WO2021189804A1 (fr) Procédé et dispositif de rectification d'image, et système électronique
KR101853269B1 (ko) 스테레오 이미지들에 관한 깊이 맵 스티칭 장치
WO2020181506A1 (fr) Procédé, appareil et système de traitement d'image
JP2020047049A (ja) 画像処理装置及び画像処理方法
JP2006234703A (ja) 画像処理装置及び三次元計測装置並びに画像処理装置用プログラム
WO2019189381A1 (fr) Corps mobile, dispositif de commande, et programme de commande
WO2019047245A1 (fr) Procédé de traitement d'images, dispositif électronique et support de stockage lisible par ordinateur

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17924702

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17924702

Country of ref document: EP

Kind code of ref document: A1