CN105847679B - Image acquisition method and electronic equipment - Google Patents


Info

Publication number
CN105847679B
CN105847679B (application CN201610184583.7A)
Authority
CN
China
Prior art keywords
image
markers
range
electronic device
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610184583.7A
Other languages
Chinese (zh)
Other versions
CN105847679A (en)
Inventor
黄志辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201610184583.7A priority Critical patent/CN105847679B/en
Publication of CN105847679A publication Critical patent/CN105847679A/en
Application granted granted Critical
Publication of CN105847679B publication Critical patent/CN105847679B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an image acquisition method and electronic equipment. The method comprises: when image acquisition is to be performed, determining a relative relationship between a user of the electronic equipment and at least one marker; determining a target viewing range based on the relative relationship; and performing image acquisition according to the determined target viewing range.

Description

Image acquisition method and electronic equipment
Technical Field
The invention relates to shooting technology, and in particular to an image acquisition method and electronic equipment.
Background
At present, electronic equipment such as mobile phones and tablet computers can acquire images of target objects through cameras and obtain pictures by shooting. This function makes shooting possible anytime and anywhere, which brings great convenience to users.
However, the inventor has found that a mobile phone or tablet computer is usually kept in a backpack or a clothes/trousers pocket when not in use, so that when a good scene is discovered by chance, the scene may already be missed by the time shooting can begin. For example, while user 1 is riding a bus that stops at station A, user 1 notices a beautiful scene there; user 1 then takes the mobile phone out of the backpack or pocket and determines a viewing range through a series of tedious operations such as unlocking, starting the camera and focusing, by which time the bus has probably already left station A for the next station. The beautiful scene cannot be shot, which undoubtedly degrades the user experience.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the invention provide an image acquisition method and electronic equipment, which can at least realize quick shooting.
The technical scheme of the embodiment of the invention is realized as follows:
an embodiment of the invention provides an image acquisition method, which comprises the following steps:
when image acquisition is to be performed,
determining a relative relationship between a user of the electronic equipment and at least one marker;
determining a target viewing range based on the relative relationship;
and acquiring an image according to the determined target viewing range.
In the above solution, the determining a relative relationship between a user of the electronic equipment and at least one marker, and determining the target viewing range based on the relative relationship, includes:
determining, according to the positional relationship between the user of the electronic equipment and the at least one marker, a range defined by the line of sight with which the user of the electronic equipment observes the at least one marker, to obtain the target viewing range.
In the above solution, the determining a range defined by the line of sight with which the user of the electronic equipment observes the at least one marker, to obtain the target viewing range, includes:
when the user observes the at least one marker,
acquiring a first image comprising the at least one marker;
judging whether the first image satisfies a preset condition;
and, when the first image is judged to satisfy the preset condition, obtaining a first parameter, wherein the target viewing range is a stereoscopic viewing range and the first parameter characterizes at least one planar region in the stereoscopic viewing range.
In the above scheme, the method further comprises:
determining a second parameter based on the first parameter, wherein the second parameter characterizes depth information of the stereoscopic viewing range.
In the above solution, the acquiring a first image comprising the at least one marker and judging whether the first image satisfies a preset condition includes:
acquiring the first image, the first image comprising an image of the at least one marker;
judging whether the image of the marker is a predetermined pattern, and generating a first judgment result;
and, when the first judgment result indicates that the image of the marker is a predetermined pattern, determining that the first image satisfies the preset condition.
An embodiment of the present invention further provides electronic equipment, which includes:
a processor, configured to determine, when image acquisition is to be performed, a relative relationship between a user of the electronic equipment and at least one marker;
and to determine a target viewing range based on the relative relationship;
and a collector, configured to acquire an image according to the determined target viewing range.
In the foregoing solution, the processor is further configured to:
determine, according to the positional relationship between the user of the electronic equipment and the at least one marker, a range defined by the line of sight with which the user of the electronic equipment observes the at least one marker, to obtain the target viewing range.
In the foregoing solution, the processor is further configured to:
when the user observes the at least one marker,
acquire a first image comprising the at least one marker;
judge whether the first image satisfies a preset condition;
and, when the first image is judged to satisfy the preset condition, obtain a first parameter, wherein the target viewing range is a stereoscopic viewing range and the first parameter characterizes at least one planar region in the stereoscopic viewing range.
In the foregoing solution, the processor is further configured to:
determine a second parameter based on the first parameter, wherein the second parameter characterizes depth information of the stereoscopic viewing range.
In the foregoing solution, the processor is further configured to:
when the first image is acquired by the collector, the first image comprising an image of the at least one marker,
judge whether the image of the marker is a predetermined pattern, and generate a first judgment result;
and, when the first judgment result indicates that the image of the marker is a predetermined pattern, determine that the first image satisfies the preset condition.
The image acquisition method and the electronic equipment provided by the embodiments of the invention comprise: when image acquisition is to be performed, determining a relative relationship between a user of the electronic equipment and at least one marker; determining the target viewing range based on the relative relationship; and performing image acquisition according to the determined target viewing range. Since the viewing range is determined directly from the relative relationship between the user and the marker, quick shooting can be realized.
Drawings
FIG. 1 is a schematic flow chart of an implementation of a first embodiment of the image acquisition method provided by the present invention;
FIG. 2 is a schematic flow chart of an implementation of a second embodiment of the image acquisition method provided by the present invention;
FIGS. 3(a) and 3(b) are first schematic diagrams of images of markers provided by the present invention;
FIG. 4 is a second schematic diagram of an image of a marker provided by the present invention;
FIGS. 5(a) and 5(b) are third schematic diagrams of images of markers provided by the present invention;
FIG. 6 is a schematic diagram of the positional relationship between an electronic device and a marker according to the present invention;
FIG. 7 is a structural diagram of a first embodiment of an electronic device according to the present invention;
FIG. 8 is a structural diagram of a second embodiment of an electronic device according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are only for illustrating and explaining the present invention and are not to be construed as limiting it.
In the following embodiments of the image acquisition method and the electronic equipment provided by the invention, the first electronic device concerned includes, but is not limited to, various types of computers such as industrial control computers, personal computers, all-in-one computers, tablet computers, mobile phones, electronic readers and the like. Of course, the first electronic device can also be a wearable electronic device such as smart glasses or a smart watch; the preferred form of the first electronic device in the embodiments of the invention is smart glasses.
Example one
The first embodiment of the image acquisition method is applied to a first electronic device, wherein the first electronic device is provided with an image acquisition unit that can shoot/acquire images. The image acquisition unit can be a camera, and the first electronic device is preferably smart glasses.
Fig. 1 is a schematic flow chart of an implementation of an image acquisition method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step 101: when image acquisition is to be performed, determining a relative relationship between a user of the electronic equipment and at least one marker;
in practice, waves with certain energy emitted from objects may reach another object through air propagation, the air propagation may be hindered by 5 determined obstacles such as trees, dust, etc., the waves may be attenuated to a certain degree by when reaching another object, another may obtain the distance between itself and the emitting object through the difference between the energy received by the object and the energy propagated by the emitting object and the wavelength of the waves, and may calculate the distance between itself and the emitting object through the second electronic device as a transmitting party of the waves, the as an electronic device, and the electronic device may calculate the distance between itself and the receiving party through which the second electronic device is located in a south direction of the emitting object and through which longitude and latitude information of the second electronic device is located in a second electronic device, such as a GPS module 3637 may calculate the distance between itself and the emitting object through a GPS electronic device based on the principles of positioning of the second electronic device, such as GPS module 3936 may calculate the distance between itself and the second electronic device and the electronic device.
The first electronic device can also perform image acquisition and image recognition on specific objects. When a specific object is recognized as having a specific shape, a specific color, or a specific two-dimensional code pattern on it, the first electronic device acquires the distance and orientation between that marker and itself. In practical application, the first electronic device propagates a wave in the direction of the marker; when the wave reaches the marker, a reflected wave is produced and returns to the first electronic device. The first electronic device can calculate the direction of the marker relative to itself from the directions of the outgoing and reflected waves, and the distance of the marker relative to itself from the energies of the outgoing and reflected waves. In this way, when the marker is a specific object, the first electronic device can likewise obtain the relative positional relationship between the marker and itself.
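Once a bearing and a range to the marker have been estimated from the outgoing and reflected waves, they can be turned into a relative position. The sketch below is illustrative only; the function name and the (east, north) convention are assumptions.

```python
import math

def marker_offset(bearing_deg: float, range_m: float):
    """Convert the estimated direction of the marker (a bearing in degrees,
    measured clockwise from north) and its estimated range into an
    (east, north) offset of the marker relative to the device."""
    theta = math.radians(bearing_deg)
    return (range_m * math.sin(theta), range_m * math.cos(theta))
```

For example, a marker at bearing 90 degrees and range 5 m lies 5 m due east of the device.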
Step 102: determining a target view range based on the relative relationship;
the execution subject of step 102 is the th electronic device.
The th electronic device determines a scene range to be photographed (target scene range) based on the acquired relative relationship with the at least markers.
Step 103: and acquiring an image according to the determined target view range.
The execution subject of step 103 is the first electronic device.
Here, the first electronic device photographs the scenery/people within the target viewing range, resulting in a photographed image.
Thus, in this embodiment, when the first electronic device is to capture an image, it first obtains the relative relationship between itself and at least one marker, then determines the target viewing range from that relative relationship, and finally photographs the scenery/people within the target viewing range to obtain a captured image. In other words, the viewing range (target viewing range) required for capturing the image is determined by the relative relationship between the electronic device and the marker, so the shooting range is determined jointly by the marker and the first electronic device, which enables quick shooting.
Example two
The second embodiment of the image acquisition method provided by the invention is applied to a first electronic device, wherein the first electronic device is provided with an image acquisition unit that can shoot/acquire images. The image acquisition unit can be a camera, and the first electronic device is preferably smart glasses.
Fig. 2 is a schematic flow chart illustrating an implementation of a second embodiment of the image acquisition method provided by the present invention. As shown in fig. 2, the method includes:
step 201: when image acquisition is to be performed, determining, according to the positional relationship between a user of the electronic equipment and at least one marker, a range defined by the line of sight with which the user of the electronic equipment observes the at least one marker, to obtain the target viewing range;
The execution subject of step 201 is the first electronic device.
The target viewing range is a stereoscopic range, which is generally composed of a planar range and depth information. The planar range is the planar area covered by the wearer's line of sight where it stays on the marker when the wearer observes the at least one marker, and the depth information is the observation depth reached by the wearer's line of sight as it passes through the marker.
As described in the foregoing first embodiment, the marker may be an electronic device such as the second electronic device, or may be a specific object.
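The stereoscopic viewing range just described — a planar range (the first parameter) plus depth information (the second parameter) — could be represented as, say, the following structure. The class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TargetViewRange:
    """A stereoscopic viewing range: a planar region (first parameter)
    plus depth information (second parameter)."""
    plane_corners: Tuple[Tuple[float, float], ...]  # corners of the framed planar region
    depth_m: float                                  # how far the line of sight penetrates

    def is_valid(self) -> bool:
        # A usable range needs at least a triangular planar region and positive depth.
        return len(self.plane_corners) >= 3 and self.depth_m > 0
```

A rectangular frame with a 5 m observation depth would then be `TargetViewRange(((0, 0), (1, 0), (1, 1), (0, 1)), 5.0)`.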
Step 202: acquiring an image according to the determined target view range;
the execution subject of step 202 is the th electronic device.
Here, the th electronic device photographs a scene/person within the target viewing range, resulting in a photographed image.
The method comprises the steps of obtaining a target view range according to the position relation between a user of the electronic equipment and the markers, wherein the target view range is determined based on the fact that the user of the electronic equipment observes the sight line of at least markers, and shooting the scenery/people in the target view range to acquire images.
In preferred embodiments of the present invention, the determining the target viewing range based on a range defined by the line of sight with which the user of the electronic equipment observes the at least one marker comprises:
obtaining a first image comprising the at least one marker when the user observes the at least one marker; judging whether the first image satisfies a preset condition; and obtaining a first parameter when the first image satisfies the preset condition, wherein the target viewing range is a stereoscopic viewing range and the first parameter characterizes at least one planar region in the stereoscopic viewing range.
After the first parameter is obtained, the method further comprises:
determining a second parameter based on the first parameter, wherein the second parameter characterizes depth information of the stereoscopic viewing range; preferably, the depth information is the depth reached by the wearer's line of sight as it observes the at least one marker through the at least one planar region.
The acquiring a first image comprising the at least one marker and judging whether the first image satisfies a preset condition includes: acquiring the first image, the first image comprising an image of the at least one marker; judging whether the image of the marker is a predetermined pattern and generating a first judgment result; and determining that the first image satisfies the preset condition when the first judgment result shows that the image of the marker is the predetermined pattern.
The marker may be an electronic device (the second electronic device) comprising at least four sensors worn respectively on the thumbs and index fingers of the user's left and right hands, where the user may preferably be the wearer of the first electronic device. When the user forms a figure as shown in fig. 3(a) or fig. 3(b) with the thumbs and index fingers of both hands, the first electronic device captures the scene to obtain a first image including the figure: a rectangular frame formed by the thumbs and index fingers of the left and right hands, or by their extension lines. Since the sensors are worn on the thumbs and index fingers, the images of the four sensors also form a rectangular frame. The first electronic device then judges whether the images of the four sensors form a predetermined figure, such as a rectangular frame figure; if so, it determines that the first image satisfies the preset condition and obtains the first parameter, i.e. the planar region of the shooting range. In addition, by measuring the angle between the plane of the rectangular frame and the horizontal plane, the first electronic device can determine the orientation of the target viewing range.
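The rectangular-frame check on the four sensor image points can be sketched as follows. This is an illustrative test, not the patent's algorithm: it relies on the fact that four distinct points form a rectangle exactly when they are all equidistant from their common centroid (the diagonals then bisect each other and have equal length), and it ignores degenerate configurations of coincident points.

```python
import math
from typing import Sequence, Tuple

def is_rectangle(pts: Sequence[Tuple[float, float]], tol: float = 1e-6) -> bool:
    """Return True if the four sensor image points form (approximately)
    a rectangle: all four points equidistant from their centroid."""
    if len(pts) != 4:
        return False
    cx = sum(x for x, _ in pts) / 4.0
    cy = sum(y for _, y in pts) / 4.0
    dists = [math.hypot(x - cx, y - cy) for x, y in pts]
    # Reject near-coincident points and uneven distances.
    return min(dists) > tol and max(dists) - min(dists) <= tol
```

A rotated square such as (1, 0), (0, 1), (-1, 0), (0, -1) passes; an irregular quadrilateral does not.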
When the marker is an object with a specific shape, such as the rectangular frame shown in fig. 4, a user such as the wearer can hold the object by hand or with an auxiliary device. When the wearer observes the specific object, the first electronic device captures it to obtain a first image, judges whether the image of the specific object in the first image is rectangular, and, when it is judged to be rectangular, determines that the image of the specific object is the predetermined pattern, determines that the first image satisfies the preset condition, and obtains the shooting range.
The marker may also be an object capable of expressing specific information, such as an object bearing a predetermined mark. As shown in fig. 5(a), the predetermined mark is a two-dimensional code located at the lower left corner of a rectangular frame. When the wearer observes the rectangular frame, the first electronic device captures it to obtain a first image and judges whether the image of the pattern at the lower left corner of the rectangular frame is a two-dimensional code pattern; if so, the image of the pattern is determined to be the predetermined pattern, the first image is determined to satisfy the preset condition, and the first parameter is obtained. Alternatively, as shown in fig. 5(b), the regions at the four corners of the rectangular frame are colored red as the predetermined mark. The first electronic device captures the rectangular frame to obtain a first image and judges whether the patterns at the four corners of the rectangular frame in the first image are red; if so, the first image is determined to satisfy the preset condition, and the shooting range is obtained.
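The red-corner variant of fig. 5(b) could be checked along these lines. This is a minimal sketch on a plain nested-list image; the function name, patch size, and color thresholds are all illustrative assumptions.

```python
def corners_are_red(img, patch: int = 2, min_red: int = 200, max_other: int = 80) -> bool:
    """img is an H x W nested list of (R, G, B) pixels in 0-255.
    Returns True when a small patch at each of the four corners is
    predominantly red, as with the predetermined mark of fig. 5(b)."""
    h, w = len(img), len(img[0])
    starts = [(0, 0), (0, w - patch), (h - patch, 0), (h - patch, w - patch)]
    for r0, c0 in starts:
        for r in range(r0, r0 + patch):
            for c in range(c0, c0 + patch):
                red, green, blue = img[r][c]
                if red < min_red or green > max_other or blue > max_other:
                    return False
    return True
```

In practice the frame would first be located in the first image; this only shows the per-corner color test.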
In practical applications, as shown in fig. 6, the marker may be closer to the first electronic device (the marker at position 1) or farther from it (the marker at position 2). When the distance between the marker and the first electronic device differs, the target viewing range determined from the line of sight with which the wearer observes the marker also differs. When the same scene A, such as a cup, is captured, the viewing angle obtained through the marker at position 1 differs from that obtained when the marker is at position 2: in fig. 6, the cup drawn with dashed lines is the cup seen along the dashed line of sight through the marker at one position, and the cup drawn with solid lines is the cup seen along the solid line of sight through the marker at the other. For example, if the first image is as shown in fig. 3(a) or 3(b), the same rectangular frame subtends a wider angle when the marker is at the nearer position 1, so the captured shooting range is larger; at the farther position 2 the angle is narrower and the shooting range is correspondingly smaller.
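The geometric relationship behind positions 1 and 2 can be made concrete: a frame of fixed width subtends a wider angle the closer it is held to the eye/camera. The function below is an illustrative sketch, not part of the patent.

```python
import math

def view_half_angle_deg(marker_width_m: float, distance_m: float) -> float:
    """Half of the horizontal viewing angle subtended at the eye/camera by a
    marker frame of the given width held at the given distance. The nearer
    the frame (position 1), the wider the captured scene."""
    return math.degrees(math.atan(marker_width_m / (2.0 * distance_m)))
```

A 0.2 m frame held at 0.5 m subtends a wider angle than the same frame at 2 m, matching the larger shooting range at position 1.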
Example three
This is the first embodiment of the first electronic device. The first electronic device is provided with a first image acquisition unit that can shoot/acquire images; the image acquisition unit can be a camera, and the first electronic device is preferably smart glasses.
Fig. 7 is a structural diagram of the first electronic device according to the first embodiment of the present invention. As shown in fig. 7, the first electronic device includes:
a processor 10, configured to determine a relative relationship between a user of the electronic equipment and at least one marker when image acquisition is to be performed;
and to determine a target viewing range based on the relative relationship.
In practice, a wave with a certain energy emitted from one object may reach another object through air propagation; obstacles such as trees and dust hinder the propagation, so the wave is attenuated to a certain degree by the time it reaches the other object. The receiving object can estimate the distance to the emitting object from the difference between the received energy and the emitted energy together with the wavelength of the wave. Taking the second electronic device as the transmitting party and the first electronic device as the receiving party, the first electronic device can calculate the distance between itself and the second electronic device in this way, and can determine the orientation of the second electronic device by comparing its own global positioning information, such as longitude and latitude obtained from an embedded GPS module, with that of the second electronic device.
The first electronic device, specifically the collector 11, performs image acquisition on specific objects, and the processor 10 performs image recognition. When a specific object is recognized as having a specific shape, a predetermined color, or a predetermined two-dimensional code pattern, the first electronic device, specifically the processor 10, acquires the distance and orientation between that marker and itself: the first electronic device propagates a wave in the direction of the marker; when the wave reaches the marker, a reflected wave returns to the first electronic device, and the direction of the marker can be calculated from the directions of the outgoing and reflected waves, while its distance can be calculated from their energies. In this way the relative positional relationship between the marker and the first electronic device is obtained.
The first electronic device, in particular the processor 10, determines the scene range to be captured (the target viewing range) based on the obtained relative relationship with the at least one marker.
And the collector 11 is used for collecting images according to the determined target view range.
Here, the collector 11 is the aforementioned image acquisition unit.
Here, the first electronic device, specifically the collector 11, photographs the scenery/people within the target viewing range, resulting in a photographed image.
Thus, in this embodiment, when the first electronic device is to capture an image, it first obtains the relative relationship between itself and at least one marker, then determines the target viewing range from that relative relationship, and finally photographs the scenery/people within the target viewing range to obtain a captured image; that is, the shooting range is determined jointly by the marker and the first electronic device.
Example four
This is the second embodiment of the first electronic device. The first electronic device is provided with a first image acquisition unit that can shoot/acquire images; the image acquisition unit can be a camera, and the first electronic device is preferably smart glasses.
Fig. 8 is a structural diagram of the first electronic device according to the second embodiment of the present invention. As shown in fig. 8, the first electronic device includes:
a processor 20, configured to determine, according to the positional relationship between a user of the electronic equipment and at least one marker, a range defined by the line of sight with which the user of the electronic equipment observes the at least one marker, to obtain the target viewing range;
When the first electronic device, particularly the collector 21, is to perform image acquisition, the processor 20 obtains the positional relationship, such as the orientation and/or the distance, between the first electronic device and the at least one marker observed by the wearer of the first electronic device. According to this positional relationship, the processor 20 determines a range defined by the wearer's observation line of sight to obtain the target viewing range. The target viewing range is a stereoscopic range, generally composed of a planar range and depth information: the planar range is the planar area covered by the wearer's line of sight where it stays on the marker when the wearer observes the at least one marker, and the depth information is the observation depth reached by the wearer's line of sight as it passes through the marker.
As described in the third embodiment, the marker may be an electronic device such as the second electronic device, or may be a specific object.
a collector 21, configured to acquire an image according to the determined target viewing range;
The first electronic device, specifically the collector 21, photographs the scenery/people within the target viewing range to obtain the photographed image.
In this embodiment, the target viewing range is obtained from the positional relationship between the user of the electronic equipment and the marker, the range being defined by the line of sight with which the user observes the at least one marker; the scenery/people within the target viewing range are then photographed to acquire the image.
In preferred embodiments of the present invention, the processor 20 is further configured to obtain a first image including the at least one marker when the user observes the at least one marker, judge whether the first image satisfies a preset condition, and obtain a first parameter when the first image satisfies the preset condition, wherein the target viewing range is a stereoscopic viewing range and the first parameter characterizes at least one planar region in the stereoscopic viewing range.
Wherein the processor 20 is further configured to:
determine a second parameter based on the first parameter, wherein the second parameter characterizes depth information of the stereoscopic viewing range.
Wherein the processor 20 is further configured to:
when the first image is acquired by the collector, the first image comprising an image of the at least one marker,
judge whether the image of the marker is a predetermined pattern, and generate a first judgment result;
and, when the first judgment result indicates that the image of the marker is a predetermined pattern, determine that the first image satisfies the preset condition.
When the marker is an electronic device (second electronic device) having at least four sensors respectively worn on the thumb and index finger of the user's left and right hands, where the user may preferably be the wearer of the th electronic device, when the user draws a pattern as shown in fig. 3(a) or fig. 3(b) by the ratio of the thumb and index finger of the left and right hands, the th electronic device, specifically the collector 21, collects the image to obtain a th image including the pattern, the th image includes a rectangular frame formed by the thumb and index finger of the user's left and right hands or a rectangular frame formed by the extended lines of the thumb and index finger of the left and right hands, and the processor 20 determines whether the image of the marker is a predetermined pattern, i.e., whether the image of the four sensors is a predetermined pattern, i.e., a rectangular frame pattern, if it is determined that the rectangular frame pattern is a rectangular frame pattern, it is determined that the th image satisfies a predetermined condition, and if it is determined that the angle of the image of the target person is within a horizontal plane, it is determined that the angle of the target person is within the horizontal plane, and if it is within the angle of the horizontal plane measured by the angle of the target person, it is determined that the angle of the target person is within the horizontal plane, it is within the horizontal plane, and it is determined that the angle of the target person is within the target angle of the target person.
When the marker is an object with a specific shape, such as the rectangular frame shown in fig. 4, the user, for example the wearer, can hold the object by hand or support it with an auxiliary device. When the wearer observes the specific object, the first electronic device, specifically the collector 21, captures the specific object to obtain a first image. The processor 20 judges whether the image of the specific object in the first image is rectangular; when it is, the image of the specific object is determined to be the predetermined image, the first image is determined to satisfy the predetermined condition, and the shooting range is obtained.
The marker may also be an object capable of expressing specific information, for example an object bearing predetermined marks. As shown in fig. 5(a), the predetermined mark is located at the lower-left corner of a rectangular frame. When the wearer observes the rectangular frame, the first electronic device, specifically the collector 21, captures it to obtain a first image, and the processor 20 judges whether the image of the lower-left-corner pattern of the rectangular frame in the first image is a two-dimensional code pattern. If it is, the image of the marker is determined to be the predetermined pattern, the first image is determined to satisfy the predetermined condition, and the first parameter is obtained. Alternatively, as shown in fig. 5(b), the regions at the four corners of the rectangular frame are painted red as the predetermined marks. The first electronic device, specifically the collector 21, captures the rectangular frame to obtain a first image, and the processor 20 judges whether the patterns at the four corners of the rectangular frame in the first image are red. If they are, the first image is determined to satisfy the predetermined condition, and the image acquisition range is determined accordingly.
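The four-red-corners check of fig. 5(b) could be sketched as follows. This is an illustrative assumption, not the actual implementation: the image is modeled as a plain list of RGB pixel rows, and the thresholds (`min_red`, `max_other`) and patch size are made-up values.

```python
def corners_are_red(image, patch=3, min_red=150, max_other=100):
    """image: list of rows, each row a list of (r, g, b) tuples.
    Sample a patch x patch block at each of the four corners; every
    sampled pixel must be dominantly red for the check to pass."""
    h, w = len(image), len(image[0])
    corners = [(0, 0), (0, w - patch), (h - patch, 0), (h - patch, w - patch)]
    for (r0, c0) in corners:
        for r in range(r0, r0 + patch):
            for c in range(c0, c0 + patch):
                red, green, blue = image[r][c]
                # "Red" here means a high red channel and low green/blue.
                if red < min_red or green > max_other or blue > max_other:
                    return False
    return True
```

In practice one would locate the frame first and sample the corner regions of the frame rather than of the whole image, but the thresholding idea is the same.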
In practical applications, as shown in fig. 6, the marker may be closer to the first electronic device (the marker at position 1) or farther from it (the marker at position 2). When the distance between the marker and the first electronic device differs, the target framing range determined from the line of sight along which the wearer observes the marker also differs. When capturing the same scene A, such as a cup, the first electronic device captures the image through the marker at position 1 at a viewing angle different from that obtained through the marker at position 2; in fig. 6, the cup drawn with a dashed line and the cup drawn with a solid line illustrate the different framing ranges obtained when the line of sight passes through the marker at the two positions. The framing range therefore differs in both the horizontal and vertical directions depending on the marker's distance.
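The distance effect illustrated in fig. 6 follows from similar triangles: sight lines from the eye through the two edges of the marker frame diverge, so the width they span at the scene grows linearly with the scene's distance. A minimal sketch of this geometry (the function name and unit convention are illustrative assumptions, not from the patent):

```python
def framing_width(marker_width, eye_to_marker, eye_to_scene):
    """Width spanned at the scene by sight lines through the two edges of
    a marker frame of width `marker_width`, held `eye_to_marker` in front
    of the eye, for a scene at `eye_to_scene` (similar triangles; all
    lengths in the same unit)."""
    if eye_to_marker <= 0 or eye_to_scene < eye_to_marker:
        raise ValueError("marker must lie between the eye and the scene")
    return marker_width * eye_to_scene / eye_to_marker
```

Holding the same frame nearer the eye (smaller `eye_to_marker`) widens the range framed at the scene, which is why positions 1 and 2 in fig. 6 yield different framing ranges for the same cup.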
Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
It is to be understood that each flow and/or block in the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions which can be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flow diagram flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (6)

1. A method of image acquisition, the method comprising:
when preparing to perform image acquisition,
determining a relative relationship between a user of an electronic device and at least one marker, wherein the marker is an object having a relative positional relationship with the electronic device;
determining a target framing range based on the relative relationship; and
acquiring an image according to the determined target framing range;
wherein the determining the relative relationship between the user of the electronic device and the at least one marker and the determining the target framing range based on the relative relationship comprise:
determining, according to the positional relationship between the user of the electronic device and the at least one marker, a range defined by the line of sight along which the user of the electronic device observes the at least one marker, to obtain the target framing range;
wherein the obtaining the target framing range based on the range defined by the line of sight of the user of the electronic device observing the at least one marker comprises:
when the user observes the at least one marker,
acquiring a first image comprising the at least one marker;
judging whether the first image meets a preset condition; and
when the first image is judged to meet the preset condition, obtaining a first parameter, wherein the target framing range is a stereoscopic framing range and the first parameter characterizes at least one plane area in the stereoscopic framing range.
2. The method of claim 1, further comprising:
determining a second parameter based on the first parameter, wherein the second parameter characterizes depth information of the stereoscopic framing range.
3. The method according to claim 1 or 2, wherein the acquiring a first image comprising the at least one marker and the judging whether the first image satisfies the preset condition comprise:
acquiring the first image, the first image comprising an image of the at least one marker;
judging whether the image of the marker is a predetermined image, and generating a first judgment result; and
when the first judgment result indicates that the image of the marker is the predetermined image, determining that the first image satisfies the preset condition.
4. An electronic device, the electronic device comprising:
a processor, configured to determine, when preparing to perform image acquisition, a relative relationship between a user of the electronic device and at least one marker, wherein the marker is an object having a relative positional relationship with the electronic device,
and to determine a target framing range based on the relative relationship; and
a collector, configured to acquire an image according to the determined target framing range;
wherein the processor is further configured to:
determine, according to the positional relationship between the user of the electronic device and the at least one marker, a range defined by the line of sight along which the user of the electronic device observes the at least one marker, to obtain the target framing range;
wherein the processor is further configured to:
when the user observes the at least one marker,
acquire a first image comprising the at least one marker;
judge whether the first image meets a preset condition; and
when the first image is judged to meet the preset condition, obtain a first parameter, wherein the target framing range is a stereoscopic framing range and the first parameter characterizes at least one plane area in the stereoscopic framing range.
5. The electronic device of claim 4, wherein the processor is further configured to:
determine a second parameter based on the first parameter, wherein the second parameter characterizes depth information of the stereoscopic framing range.
6. The electronic device of claim 4 or 5, wherein the processor is further configured to:
acquire, via the collector, the first image, the first image comprising an image of the at least one marker;
judge whether the image of the marker is a predetermined image, and generate a first judgment result; and
when the first judgment result indicates that the image of the marker is the predetermined image, determine that the first image satisfies the preset condition.
CN201610184583.7A 2016-03-28 2016-03-28 Image acquisition method and electronic equipment Active CN105847679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610184583.7A CN105847679B (en) 2016-03-28 2016-03-28 Image acquisition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105847679A CN105847679A (en) 2016-08-10
CN105847679B true CN105847679B (en) 2020-01-31

Family

ID=56583947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610184583.7A Active CN105847679B (en) 2016-03-28 2016-03-28 Image acquisition method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105847679B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651270B (en) * 2019-10-12 2024-08-02 北京七鑫易维信息技术有限公司 Gaze information determining method and device, terminal equipment and display object
CN111314602B (en) * 2020-02-17 2021-09-17 浙江大华技术股份有限公司 Target object focusing method, target object focusing device, storage medium and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1829244A (en) * 2005-03-04 2006-09-06 鸿富锦精密工业(深圳)有限公司 Mobile phone with three-dimensional camera function
CN102427541A (en) * 2011-09-30 2012-04-25 深圳创维-Rgb电子有限公司 Method and device for displaying three-dimensional image
CN102695070A (en) * 2012-06-12 2012-09-26 浙江大学 Depth consistency fusion processing method for stereo image
CN103259978A (en) * 2013-05-20 2013-08-21 邱笑难 Method for photographing by utilizing gesture
CN104243791A (en) * 2013-06-19 2014-12-24 联想(北京)有限公司 Information processing method and electronic device
CN104320587A (en) * 2014-11-12 2015-01-28 南京汉图信息技术有限公司 Method for automatically obtaining shooting range of outdoor pan-tilt camera
CN104378549A (en) * 2014-10-30 2015-02-25 东莞宇龙通信科技有限公司 Snapshot method and device and terminal
CN104793749A (en) * 2015-04-30 2015-07-22 小米科技有限责任公司 Intelligent glasses and control method and device thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100199228A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Gesture Keyboarding


Similar Documents

Publication Publication Date Title
CN106920279B (en) Three-dimensional map construction method and device
CN107113415B (en) The method and apparatus for obtaining and merging for more technology depth maps
CN106871878B (en) Hand-held range unit and method, the storage medium that spatial model is created using it
CN111815675B (en) Target object tracking method and device, electronic equipment and storage medium
JP2021520978A (en) A method for controlling the interaction between a virtual object and a thrown object, its device, and a computer program.
EP2993894B1 (en) Image capturing method and electronic apparatus
CN110555882A (en) Interface display method, device and storage medium
US11416719B2 (en) Localization method and helmet and computer readable storage medium using the same
CN107103056B (en) Local identification-based binocular vision indoor positioning database establishing method and positioning method
US20170263014A1 (en) Electronic device and control method therefor
CN108332748B (en) Indoor visible light positioning method and device
KR20140140855A (en) Method and Apparatus for controlling Auto Focus of an photographing device
CN105222717B (en) A kind of subject matter length measurement method and device
CN112785682A (en) Model generation method, model reconstruction method and device
CN110006340A (en) A kind of dimension of object measurement method and electronic equipment
CN114022532A (en) Height measuring method, height measuring device and terminal
CN106291519A (en) Distance-finding method and device
CN114600162A (en) Scene lock mode for capturing camera images
CN105847679B (en) Image acquisition method and electronic equipment
CN112052701B (en) Article taking and placing detection system, method and device
US9167166B2 (en) Image display device, imaging apparatus mounted with image display device as finder device, and image display method
CN112308103B (en) Method and device for generating training samples
CN110874699B (en) Method, device and system for recording logistics information of article
CN111127541B (en) Method and device for determining vehicle size and storage medium
CN107888827A (en) Image processing method and related product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant