CN107742275B - Information processing method and electronic equipment


Info

Publication number
CN107742275B
Authority
CN
China
Prior art keywords
target image
edge
image
position information
coordinate system
Prior art date
Legal status
Active
Application number
CN201711099435.6A
Other languages
Chinese (zh)
Other versions
CN107742275A (en)
Inventor
付助荣
张晓丹
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201711099435.6A priority Critical patent/CN107742275B/en
Publication of CN107742275A publication Critical patent/CN107742275A/en
Application granted granted Critical
Publication of CN107742275B publication Critical patent/CN107742275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/20Linear translation of a whole image or part thereof, e.g. panning
    • G06T5/80

Abstract

The invention discloses an information processing method and an electronic device, wherein the method comprises the following steps: selecting a target image from an original image; acquiring first position information of edge intersection points of the target image in the original image, and converting the first position information to obtain second spatial position information of the target image in the environment where it is located; calculating the proportional relationship of the edge lines of the target image based on the second spatial position information, which contains depth information of the edge intersection points; and projecting the target image onto a target plane based on that proportional relationship, and displaying the target image on the target plane.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to image processing technologies in the field of information processing, and in particular, to an information processing method and an electronic device.
Background
When image processing is performed with an electronic device, a common scenario is extracting a partial image from one image and displaying it on another plane; several methods are available to detect and correct a rectangular image using OpenCV.
However, these solutions cannot guarantee that the image obtained after correction maintains its correct true scale.
Disclosure of Invention
In view of the above, embodiments of the present invention are directed to an information processing method and an electronic device, which can solve at least the above problems in the prior art.
The embodiment of the invention provides an information processing method, which comprises the following steps:
selecting a target image from an original image;
acquiring first position information of edge intersection points of the target image in the original image, and converting the first position information to obtain second spatial position information of the target image in the environment where it is located;
calculating the proportional relationship of the edge lines of the target image based on the second spatial position information, which contains depth information of the edge intersection points of the target image;
and projecting the target image onto a target plane based on the proportional relationship of its edge lines, and displaying the target image on the target plane.
In the foregoing solution, the calculating, based on the second spatial position information including depth information of the edge intersection of the target image, a proportional relationship corresponding to the edge line of the target image includes:
determining distance information of edge lines between the edge intersections of the target image based on second spatial position information of the edge intersections in the second coordinate system; wherein the second coordinate system is at least capable of representing depth information;
and calculating the proportional relation between the edge lines based on the distance information of the edge lines between the edge intersection points.
In the foregoing solution, the acquiring first position information of the edge intersection of the target image in the original image includes:
detecting the edge of the target image in the original image;
obtaining at least one edge intersection point based on the edge of the target image in the original image;
first position information of the at least one edge intersection point in a first coordinate system of the original image is acquired.
In the foregoing solution, the converting the first position information of the edge intersection of the target image in the first coordinate system corresponding to the original image into the second spatial position information of the spatial position of the target image in the environment in the second coordinate system includes:
and matching the first position information of the edge intersection points of the target image in the first coordinate system corresponding to the original image with a frame image of the point cloud, according to the timestamp of the current frame of the target image, to obtain second spatial position information of the spatial position of the target image in its environment in a second coordinate system.
In the above scheme, the selecting a target image from the original image includes:
selecting the target image from the original image based on operation selection;
or,
and selecting the target image from the designated position in the original image.
An embodiment of the present invention provides an electronic device, including:
the image acquisition unit is used for selecting and obtaining a target image from the original image;
the conversion unit is used for acquiring first position information of an edge intersection point of the target image in the original image, and converting the first position information to obtain second spatial position information of the target image in the environment; calculating to obtain a proportional relation corresponding to the edge line of the target image based on second spatial position information containing depth information of the edge intersection of the target image; and projecting the target image to a target plane based on the proportional relation corresponding to the edge line of the target image, and displaying the target image on the target plane.
In the above solution, the converting unit is configured to determine distance information of edge lines between the edge intersections of the target image based on second spatial position information of the edge intersections in the second coordinate system; wherein the second coordinate system is at least capable of representing depth information; and calculating the proportional relation between the edge lines based on the distance information of the edge lines between the edge intersection points.
In the above solution, the converting unit is configured to detect an edge of the target image in the original image; obtaining at least one edge intersection point based on the edge of the target image in the original image; first position information of the at least one edge intersection point in a first coordinate system of the original image is acquired.
In the foregoing solution, the converting unit is configured to match, according to the timestamp of the current frame of the target image, first spatial position information of an edge intersection of the target image in a first coordinate system corresponding to an original image with a frame image of a point cloud, so as to obtain second spatial position information of a spatial position of the target image in an environment where the target image is located in a second coordinate system.
An embodiment of the present invention provides an electronic device, which is characterized by including: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the aforementioned method when running the computer program.
By adopting the embodiment of the invention, the first position information of the target image in the original image is converted into second position information that expresses the environment in which the target image is located, the true proportional relationship of the target image's edge lines is thereby obtained, and the target image is finally displayed based on that proportional relationship. In this way, a target image captured from any plane can be displayed front-on at its true proportions; the adjustment introduces no distortion or similar problems, and the image adjustment effect is improved.
Drawings
FIG. 1 is a flow chart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of scenario 1 according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of scenario 2 according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of scenario 3 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of scenario 4 according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of scenario 5 according to an embodiment of the present invention;
Fig. 7 is a schematic view of a composition structure of an electronic device according to an embodiment of the invention.
Detailed Description
The following describes the embodiments in further detail with reference to the accompanying drawings.
Embodiment 1,
An embodiment of the present invention provides an information processing method, applied to an electronic device, as shown in fig. 1, including:
step 101: selecting a target image from an original image;
step 102: acquiring first position information of edge intersection points of the target image in the original image, and converting the first position information to obtain second spatial position information of the target image in the environment where it is located;
step 103: calculating the proportional relationship of the edge lines of the target image based on the second spatial position information, which contains depth information of the edge intersection points;
step 104: projecting the target image onto a target plane based on that proportional relationship, and displaying the target image on the target plane.
Here, the electronic device may be any device with processing capability, for example a desktop computer, a notebook computer, a tablet, a smartphone, and the like.
In the foregoing step 101, the target image may be obtained from the original image in either of two ways: directly taking the image at a specified position of the original image as the target image;
or selecting the target image according to a user's selection operation. The selection operation may be tracing the outline of the target image in the original image to obtain it; alternatively, the user may simply tap any position within the target image, and the electronic device then selects the target image according to the tapped region.
For example, the current original image is an image as shown in fig. 2, and the user needs to select a target image (i.e., a drawing in the original image) containing a bear; then, the operation shown in fig. 3 is performed, and the user clicks the target image; then, as shown in fig. 4, a target image is selected from the original image; i.e. the picture in the box in fig. 4.
The acquiring first position information of the edge intersection point of the target image in the original image comprises:
detecting the edge of the target image in the original image; obtaining at least one edge intersection point based on the edge of the target image in the original image; first position information of the at least one edge intersection point in a first coordinate system of the original image is acquired.
As shown in fig. 4, the edge of the target image in the original image is detected, at least one edge of the target image is obtained from the original image, and four edge lines exist in the target image in fig. 4.
At least one edge intersection is obtained, which may be obtained for every two adjacent edge lines, for example, based on the four edge lines in fig. 4, two adjacent edge lines intersect to obtain four edge intersections, i.e., the edge intersections shown in fig. 5.
Further, first position information of the at least one edge intersection point in a first coordinate system of the original image is acquired; that is, the (X, Y) coordinates of each edge intersection point in the coordinate system corresponding to the original image. As shown in fig. 5, the four (x, y) coordinate values of the four edge intersection points in the illustrated coordinate system are obtained.
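The corner acquisition described above can be sketched as follows; this is an illustrative example, not code from the patent, and `line_intersection` is a hypothetical helper. Each pair of adjacent detected edge lines is intersected to yield one corner's (x, y) position in the image coordinate system:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4,
    returned in image (x, y) pixel coordinates, or None if parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel edge lines have no intersection point
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (x, y)

# Two adjacent edges of a quadrilateral: a horizontal edge and a vertical edge
corner = line_intersection((0, 0), (10, 0), (4, -5), (4, 5))  # → (4.0, 0.0)
```

Repeating this for each of the four pairs of adjacent edge lines yields the four edge intersection points shown in fig. 5.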
Further, the converting the first position information of the edge intersection point of the target image in the first coordinate system corresponding to the original image into the second spatial position information of the spatial position of the target image in the environment in the second coordinate system includes:
and matching the first position information of the edge intersection points of the target image in the first coordinate system corresponding to the original image with a frame image of the point cloud, according to the timestamp of the current frame of the target image, to obtain second spatial position information of the spatial position of the target image in its environment in a second coordinate system.
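The timestamp matching can be sketched as a nearest-timestamp lookup; this is a hypothetical illustration of the step (a depth platform such as Tango performs the equivalent matching internally):

```python
def match_point_cloud_frame(rgb_timestamp, cloud_frames):
    """cloud_frames: list of (timestamp, frame) pairs captured by the depth
    sensor; pick the point-cloud frame whose timestamp is nearest to the
    current RGB frame's timestamp."""
    return min(cloud_frames, key=lambda tf: abs(tf[0] - rgb_timestamp))

# Hypothetical capture timestamps in seconds:
frames = [(0.95, "cloud_a"), (1.02, "cloud_b"), (1.30, "cloud_c")]
ts, frame = match_point_cloud_frame(1.00, frames)  # picks "cloud_b"
```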
It should be noted that the second coordinate system may be a world coordinate system, that is, there are three dimensional coordinates, for example, x, y, and z axes;
the first coordinate system and the second coordinate system may be transformed based on a preset formula, and the second coordinate system obtained after the transformation has depth information, so that at least the distance between the target image in the original picture and the camera can be expressed, or it can be considered that the distance between each edge point in the target image and the camera can be obtained.
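As a sketch of the kind of preset formula referred to above, assuming a standard pinhole camera model with hypothetical intrinsic parameters (the patent does not specify the formula), a pixel plus its depth can be back-projected into 3-D camera coordinates that carry the distance information:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) with a known depth into 3-D
    camera coordinates, using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics: focal lengths 500 px, principal point (320, 240)
pt = pixel_to_camera(420, 240, 2.0, fx=500, fy=500, cx=320, cy=240)
# → (0.4, 0.0, 2.0): a point 0.4 m to the right, 2 m in front of the camera
```

A further rigid transform (rotation plus translation from device pose) would map these camera coordinates into the world coordinate system.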
On the basis of the foregoing, the calculating, based on the second spatial position information including depth information of the edge intersection of the target image, a proportional relationship corresponding to the edge line of the target image includes:
determining distance information of edge lines between the edge intersections of the target image based on second spatial position information of the edge intersections in the second coordinate system;
and calculating the proportional relation between the edge lines based on the distance information of the edge lines between the edge intersection points.
Second spatial position information of each edge intersection point in a second coordinate system is obtained; then, at least one edge line in the second coordinate system based on the edge intersection point in the second coordinate system can be obtained;
further, distance information (which can be understood as length information of the edge line) of the edge line in the second coordinate system can be obtained;
further, calculating a proportional relation between the edge lines; for example, four edge lines 1, 2, 3, and 4 can be obtained in the second coordinate system; the proportional relation of the edge lines 1, 2, 3 and 4 can be directly calculated.
Finally, the target image is projected onto the target plane based on the proportional relationship of its edge lines in the second coordinate system and displayed on the target plane; the image correction can be obtained using a projection algorithm such as the one provided by OpenCV.
With this scheme, the distance between two points can be accurately measured using a Tango system, so the length of each edge segment is obtained; setting the width and height of the target image from these measured lengths yields the correct proportion and size. Image correction is then performed with a projection algorithm such as the one provided by OpenCV.
Further description is made in conjunction with fig. 4-6, including:
open Tango and start the preview, then detect each edge of the RGB image using OpenCV, as shown in fig. 4;
pass the intersection points of the edges to Tango, as shown in fig. 5;
Tango matches the point-cloud frame to the current RGB frame according to its timestamp, so that coordinates on the screen can be converted into world coordinates and the distance between two points can be calculated from the intersection points;
from the coordinates of each point and the length of each line segment, an image with the correct scale is recovered, as shown in fig. 6.
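The final correction step is a four-point perspective transform. As an illustrative sketch (OpenCV's `getPerspectiveTransform` performs the same solve; the helper names here are hypothetical), the homography mapping the detected corners onto the true-scale rectangle can be computed with plain numpy:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 perspective transform H mapping four src points (the
    detected corners) to four dst points (the true-scale rectangle), via
    the standard 8x8 linear system with H[2][2] fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to a point and dehomogenise."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Map a skewed quadrilateral onto a 200 x 100 rectangle, i.e. the 2:1
# true proportion recovered from the depth measurements:
H = homography_from_points(
    src=[(10, 10), (190, 30), (200, 120), (5, 100)],
    dst=[(0, 0), (200, 0), (200, 100), (0, 100)])
```

Warping every pixel of the target image with H (as `cv2.warpPerspective` would) yields the corrected, correctly proportioned image on the target plane.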
Therefore, by adopting the foregoing scheme, the first position information of the target image in the original image can be converted into second position information that expresses the environment in which the target image is located, the true proportional relationship of the target image's edge lines is thereby obtained, and the target image is finally displayed based on that proportional relationship. In this way, a target image captured from any plane can be displayed front-on at its true proportions; the adjustment introduces no distortion or similar problems, and the image adjustment effect is improved.
Embodiment 2,
An embodiment of the present invention provides an electronic device, as shown in fig. 7, including:
an image obtaining unit 71, configured to select a target image from the original image;
a conversion unit 72, configured to acquire first position information of an edge intersection of the target image in the original image, and convert the first position information to obtain second spatial position information of the target image in the environment where the target image is located; calculating to obtain a proportional relation corresponding to the edge line of the target image based on second spatial position information containing depth information of the edge intersection of the target image; and projecting the target image to a target plane based on the proportional relation corresponding to the edge line of the target image, and displaying the target image on the target plane.
Here, the electronic device may be any device with processing capability, for example a desktop computer, a notebook computer, a tablet, a smartphone, and the like.
The aforementioned image acquisition unit 71 is configured to directly take the image at a specified position of the original image as the target image;
or to select the target image according to a user's selection operation. The selection operation may be tracing the outline of the target image in the original image to obtain it; alternatively, the user may simply tap any position within the target image, and the electronic device then selects the target image according to the tapped region.
For example, the current original image is an image as shown in fig. 2, and the user needs to select a target image (i.e., a drawing in the original image) containing a bear; then, the operation shown in fig. 3 is performed, and the user clicks the target image; then, as shown in fig. 4, a target image is selected from the original image; i.e. the picture in the box in fig. 4.
The conversion unit 72 is configured to detect an edge of the target image in the original image; obtaining at least one edge intersection point based on the edge of the target image in the original image; first position information of the at least one edge intersection point in a first coordinate system of the original image is acquired.
As shown in fig. 4, the edge of the target image in the original image is detected, at least one edge of the target image is obtained from the original image, and four edge lines exist in the target image in fig. 4.
At least one edge intersection is obtained, which may be obtained for every two adjacent edge lines, for example, based on the four edge lines in fig. 4, two adjacent edge lines intersect to obtain four edge intersections, i.e., the edge intersections shown in fig. 5.
Further, first position information of the at least one edge intersection point in a first coordinate system of the original image is acquired; that is, the (X, Y) coordinates of each edge intersection point in the coordinate system corresponding to the original image. As shown in fig. 5, the four (x, y) coordinate values of the four edge intersection points in the illustrated coordinate system are obtained.
Further, the converting unit 72 is configured to match, according to the timestamp of the current frame of the target image, first spatial position information of the edge intersection of the target image in a first coordinate system corresponding to the original image with a frame image of the point cloud to obtain second spatial position information of a spatial position of the target image in the environment in a second coordinate system.
It should be noted that the second coordinate system may be a world coordinate system, that is, there are three dimensional coordinates, for example, x, y, and z axes;
the first coordinate system and the second coordinate system may be transformed based on a preset formula, and the second coordinate system obtained after the transformation has depth information, so that at least the distance between the target image in the original picture and the camera can be expressed, or it can be considered that the distance between each edge point in the target image and the camera can be obtained.
On the basis of the foregoing, the calculating, based on the second spatial position information including depth information of the edge intersection of the target image, a proportional relationship corresponding to the edge line of the target image includes:
determining distance information of edge lines between the edge intersections of the target image based on second spatial position information of the edge intersections in the second coordinate system;
and calculating the proportional relation between the edge lines based on the distance information of the edge lines between the edge intersection points.
Second spatial position information of each edge intersection point in a second coordinate system is obtained; then, at least one edge line in the second coordinate system based on the edge intersection point in the second coordinate system can be obtained;
further, distance information (which can be understood as length information of the edge line) of the edge line in the second coordinate system can be obtained;
further, calculating a proportional relation between the edge lines; for example, four edge lines 1, 2, 3, and 4 can be obtained in the second coordinate system; the proportional relation of the edge lines 1, 2, 3 and 4 can be directly calculated.
Finally, the target image is projected onto the target plane based on the proportional relationship of its edge lines in the second coordinate system and displayed on the target plane; the image correction can be obtained using a projection algorithm such as the one provided by OpenCV.
With this scheme, the distance between two points can be accurately measured using a Tango system, so the length of each edge segment is obtained; setting the width and height of the target image from these measured lengths yields the correct proportion and size. Image correction is then performed with a projection algorithm such as the one provided by OpenCV.
Further description is made in conjunction with fig. 4-6, including:
open Tango and start the preview, then detect each edge of the RGB image using OpenCV, as shown in fig. 4;
pass the intersection points of the edges to Tango, as shown in fig. 5;
Tango matches the point-cloud frame to the current RGB frame according to its timestamp, so that coordinates on the screen can be converted into world coordinates and the distance between two points can be calculated from the intersection points;
from the coordinates of each point and the length of each line segment, an image with the correct scale is recovered, as shown in fig. 6.
Therefore, by adopting the foregoing scheme, the first position information of the target image in the original image can be converted into second position information that expresses the environment in which the target image is located, the true proportional relationship of the target image's edge lines is thereby obtained, and the target image is finally displayed based on that proportional relationship. In this way, a target image captured from any plane can be displayed front-on at its true proportions; the adjustment introduces no distortion or similar problems, and the image adjustment effect is improved.
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may store program code for executing the information processing method described above.
Optionally, in this embodiment, the storage medium is configured to store a program for executing the various steps described in the first embodiment.
An embodiment of the present invention further provides a user equipment, which includes: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is configured to execute the steps of the method according to the first embodiment when the computer program is executed.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An information processing method applied to an electronic device, the method comprising:
selecting and obtaining a target image from the original image;
acquiring first position information of an edge intersection point of the target image in a first coordinate system corresponding to the original image, and converting the first position information to obtain second space position information of a space position of the target image in the environment in a second coordinate system;
calculating to obtain a proportional relation corresponding to the edge line of the target image based on second spatial position information containing depth information of the edge intersection of the target image;
and projecting the target image to a target plane based on the proportional relation corresponding to the edge line of the target image, and displaying the rectangular target image on the target plane.
2. The method according to claim 1, wherein calculating the proportional relationship of the edge lines of the target image based on the second spatial position information, containing depth information, of the edge intersection points of the target image comprises:
determining distance information of the edge lines between the edge intersection points of the target image based on the second spatial position information of the edge intersection points in the second coordinate system, wherein the second coordinate system is at least capable of representing depth information; and
calculating the proportional relationship among the edge lines based on the distance information of the edge lines between the edge intersection points.
3. The method according to claim 1, wherein acquiring the first position information of the edge intersection points of the target image in the first coordinate system corresponding to the original image comprises:
detecting edges of the target image in the original image;
obtaining at least three edge intersection points based on the edges of the target image in the original image; and
acquiring first position information of the at least three edge intersection points in the first coordinate system of the original image.
4. The method according to claim 2, wherein converting the first position information of the edge intersection points of the target image in the first coordinate system corresponding to the original image into the second spatial position information of the spatial position of the target image in the environment in the second coordinate system comprises:
matching, according to a timestamp of a current frame of the target image, the first position information of the edge intersection points of the target image in the first coordinate system corresponding to the original image with a frame image of a point cloud, to obtain the second spatial position information of the spatial position of the target image in the environment in the second coordinate system.
5. The method of claim 1, wherein selecting the target image from the original image comprises:
selecting the target image from the original image based on a selection operation; or
selecting the target image from a designated position in the original image.
6. An electronic device, comprising:
an image acquisition unit configured to select a target image from an original image; and
a conversion unit configured to: acquire first position information of edge intersection points of the target image in a first coordinate system corresponding to the original image, and convert the first position information to obtain second spatial position information of the spatial position of the target image in the environment in a second coordinate system; calculate a proportional relationship of the edge lines of the target image based on the second spatial position information, containing depth information, of the edge intersection points of the target image; and project the target image onto a target plane based on the proportional relationship of the edge lines of the target image, and display the target image as a rectangle on the target plane.
7. The electronic device according to claim 6, wherein the conversion unit is configured to determine distance information of the edge lines between the edge intersection points of the target image based on the second spatial position information of the edge intersection points in the second coordinate system, wherein the second coordinate system is at least capable of representing depth information; and calculate the proportional relationship among the edge lines based on the distance information of the edge lines between the edge intersection points.
8. The electronic device according to claim 6, wherein the conversion unit is configured to detect edges of the target image in the original image; obtain at least three edge intersection points based on the edges of the target image in the original image; and acquire first position information of the at least three edge intersection points in the first coordinate system of the original image.
9. The electronic device according to claim 7, wherein the conversion unit is configured to match, according to a timestamp of a current frame of the target image, the first position information of the edge intersection points of the target image in the first coordinate system corresponding to the original image with a frame image of a point cloud, to obtain the second spatial position information of the spatial position of the target image in the environment in the second coordinate system.
10. An electronic device, comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 5 when running the computer program.
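The proportional-relationship computation recited in claims 1 and 2 can be illustrated with a short sketch. This is not the patented implementation: the function name, coordinate units, and sample corner coordinates below are all hypothetical. Given four edge intersection points expressed in a second coordinate system that carries depth, the physical edge lengths, and from them the aspect ratio of the rectangle to be displayed on the target plane, follow directly:

```python
import numpy as np

def edge_length_ratio(corners_3d):
    """Given the four edge intersection points of a quadrilateral in a
    depth-aware (second) coordinate system, return the length of each
    edge line and the width-to-height ratio of the rectangle to display."""
    pts = np.asarray(corners_3d, dtype=float)  # 4 x 3 array: (x, y, depth)
    # length of each edge line between consecutive intersection points
    lengths = [float(np.linalg.norm(pts[(i + 1) % 4] - pts[i])) for i in range(4)]
    width = (lengths[0] + lengths[2]) / 2.0    # average the opposite edges
    height = (lengths[1] + lengths[3]) / 2.0
    return lengths, width / height

# Hypothetical corners of a slanted A4-sized page in camera-centred
# coordinates (metres); depth (z) varies because the page is tilted.
corners = [(0.000, 0.00, 1.0),
           (0.297, 0.00, 1.2),
           (0.297, 0.21, 1.2),
           (0.000, 0.21, 1.0)]
lengths, ratio = edge_length_ratio(corners)
print(lengths, ratio)
```

The resulting ratio can then drive a standard perspective warp, for example a homography that maps the detected quadrilateral in the original image onto a rectangle with this aspect ratio on the target plane.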
CN201711099435.6A 2017-11-09 2017-11-09 Information processing method and electronic equipment Active CN107742275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711099435.6A CN107742275B (en) 2017-11-09 2017-11-09 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN107742275A CN107742275A (en) 2018-02-27
CN107742275B true CN107742275B (en) 2021-02-19

Family

ID=61234354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711099435.6A Active CN107742275B (en) 2017-11-09 2017-11-09 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN107742275B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1607551A (en) * 2003-08-29 2005-04-20 三星电子株式会社 Method and apparatus for image-based photorealistic 3D face modeling
JP2010008920A (en) * 2008-06-30 2010-01-14 Hoya Corp Lens position adjustment structure

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855629B (en) * 2012-08-21 2014-09-17 西华大学 Method and device for positioning target object
CN104933755B (en) * 2014-03-18 2017-11-28 华为技术有限公司 A kind of stationary body method for reconstructing and system
CN105094721B (en) * 2014-04-15 2019-04-26 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN104994367B (en) * 2015-06-30 2017-06-13 华为技术有限公司 A kind of image correction method and camera

Also Published As

Publication number Publication date
CN107742275A (en) 2018-02-27

Similar Documents

Publication Publication Date Title
CN110300264B (en) Image processing method, image processing device, mobile terminal and storage medium
US9361731B2 (en) Method and apparatus for displaying video on 3D map
RU2011108115A (en) INFORMATION PROCESSING DEVICE, METHOD OF CARD UPDATE, PROGRAM AND INFORMATION PROCESSING SYSTEM
US9607394B2 (en) Information processing method and electronic device
CN110619807B (en) Method and device for generating global thermodynamic diagram
CN109740487B (en) Point cloud labeling method and device, computer equipment and storage medium
US20240071016A1 (en) Mixed reality system, program, mobile terminal device, and method
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN110310325B (en) Virtual measurement method, electronic device and computer readable storage medium
US10861138B2 (en) Image processing device, image processing method, and program
KR101586071B1 (en) Apparatus for providing marker-less augmented reality service and photographing postion estimating method therefor
CN113160270A (en) Visual map generation method, device, terminal and storage medium
CN104917963A (en) Image processing method and terminal
CN111583329A (en) Augmented reality glasses display method and device, electronic equipment and storage medium
CN107742275B (en) Information processing method and electronic equipment
CN105631938B (en) Image processing method and electronic equipment
CN115278084A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112509058B (en) External parameter calculating method, device, electronic equipment and storage medium
CN114565777A (en) Data processing method and device
CN114913287A (en) Three-dimensional human body model reconstruction method and system
CN113706692A (en) Three-dimensional image reconstruction method, three-dimensional image reconstruction device, electronic device, and storage medium
EP3686786A1 (en) Apparatus and method for congestion visualization
CN111766947A (en) Display method, display device, wearable device and medium
JP5662787B2 (en) Mobile terminal and image processing method
CN113810626A (en) Video fusion method, device and equipment based on three-dimensional map and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant