CN116600198A - Focal length control method and device of intelligent glasses, electronic equipment and storage medium - Google Patents
Focal length control method and device of intelligent glasses, electronic equipment and storage medium
- Publication number
- CN116600198A CN202310424776.5A CN202310424776A
- Authority
- CN
- China
- Prior art keywords
- focal length
- focus
- region
- area
- recommendation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The disclosure provides a focal length control method and device for smart glasses, an electronic device, and a storage medium. The disclosure relates to the field of intelligent mobile terminals, and in particular to the technical field of smart glasses. The implementation scheme is as follows: acquiring image data collected by a camera of the smart glasses; recognizing the image data and determining a first region and a second region; outputting a focal length recommendation prompt based on the first region and the second region; acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of the current focal length; and adjusting the current focal length to a target focal length based on the focal length adjustment instruction. According to the scheme of the present disclosure, the focal length of the smart glasses camera can be adjusted intelligently, thereby improving the accuracy of image recognition and the sharpness of captured images.
Description
Technical Field
The disclosure relates to the field of intelligent mobile terminals, and in particular to the technical field of smart glasses.
Background
With the continuous development of smart devices, users expect increasingly rich functionality from smart glasses. However, owing to their particular hardware architecture, smart glasses offer few operable keys. When smart glasses are used for image recognition or image capture, an autofocus algorithm is generally relied on to focus the image, so the accuracy of image recognition and the sharpness of captured images remain limited.
Disclosure of Invention
The disclosure provides a focal length control method and device for smart glasses, an electronic device, and a storage medium.
According to a first aspect of the present disclosure, there is provided a focal length control method of smart glasses, including:
acquiring image data collected by a camera of the smart glasses;
recognizing the image data and determining a first region and a second region;
outputting a focal length recommendation prompt based on the first region and the second region;
acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of the current focal length;
and adjusting the current focal length to a target focal length based on the focal length adjustment instruction.
According to a second aspect of the present disclosure, there is provided a focal length control device of smart glasses, including:
a first acquisition module used for acquiring image data collected by a camera of the smart glasses;
a determination module used for recognizing the image data and determining a first region and a second region;
an output module used for outputting a focal length recommendation prompt based on the first region and the second region;
a second acquisition module used for acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of the current focal length;
and a first adjustment module used for adjusting the current focal length to a target focal length based on the focal length adjustment instruction.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor;
a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a method according to any one of the embodiments of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program stored on a storage medium, which when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
According to the scheme of the present disclosure, the focal length of the smart glasses camera can be adjusted intelligently, thereby improving the accuracy of image recognition and the sharpness of captured images.
The foregoing summary is for the purpose of the specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will become apparent by reference to the drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
FIG. 1 is a schematic flowchart of a focal length control method of smart glasses according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a process of determining a first region and a second region according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of blur recognition of a planar image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the blurred and clear regions of a close view according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the blurred and clear regions of a middle view according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the blurred and clear regions of a distant view according to an embodiment of the present disclosure;
FIG. 7 is a first schematic diagram of a process for adjusting the focal length based on a gesture state according to an embodiment of the present disclosure;
FIG. 8 is a second schematic diagram of a process for adjusting the focal length based on a gesture state according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a focal length control device of smart glasses according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a scenario of a focal length control method of smart glasses according to an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of an electronic device for implementing a focal length control method of smart glasses according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terms "first", "second", "third" and the like in the description, in the claims and in the above drawings are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a series of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, because the interaction means of some smart devices are limited, low image recognition accuracy and low image capture sharpness can occur. For example, the main input channel of smart glasses is the camera, and the available interaction modes are not convenient, so the recognition and capture results are often poor. The main reason is that the camera input covers a whole region, and the image recognition result is derived directly from that input; in practice, however, the captured scene may be complex, and the smart glasses lack a corresponding interaction method to cope with it.
In the related art, the user wears the smart glasses in front of the eyes, and because the smart glasses have few operation keys, it is difficult to perform related operations by hand during image recognition and image capture. Therefore, auxiliary operation capability needs to be added to these processes so that the focal length of the camera can be adjusted. The accuracy of image recognition and the sharpness of image capture are generally improved through autofocus. However, autofocus can satisfy focal length adjustment only in some scenes: when a scene contains multiple depth layers, autofocus cannot selectively focus on a particular region of the scene. Autofocus therefore suffers from limited scene coverage and an inability to differentiate between regions.
To at least partially solve one or more of the above problems and other potential problems, the present disclosure proposes a focal length adjustment method for image recognition and image capture on smart glasses, which enriches the interaction of the smart glasses with voice instructions and gesture instructions. Based on a voice or gesture instruction, the focal length of the smart glasses camera is adjusted, which enriches the adjustability of the focal length, increases the selectivity of focal length control, and improves the accuracy of image recognition and the sharpness of captured images. At the same time, the complexity of operation during image recognition or image capture is reduced, improving the user experience.
An embodiment of the disclosure provides a focal length control method for smart glasses. FIG. 1 is a schematic flowchart of the focal length control method according to an embodiment of the disclosure. The method may be applied to a focal length control device of the smart glasses, and that device is located in an electronic apparatus such as the smart glasses themselves. In some possible implementations, the method may also be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in FIG. 1, the focal length control method of the smart glasses includes the following steps, which are also illustrated by the code sketch after the step list:
S101: acquiring image data collected by a camera of the smart glasses;
S102: recognizing the image data and determining a first region and a second region;
S103: outputting a focal length recommendation prompt based on the first region and the second region;
S104: acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of the current focal length;
S105: adjusting the current focal length to a target focal length based on the focal length adjustment instruction.
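The following Python sketch only illustrates how steps S101 to S105 could be orchestrated on the glasses side; it is not part of the disclosure, and the `camera`, `analyzer` and `interaction` interfaces (`capture_frame`, `detect_regions`, `recommend_focal_length`, `show_prompt`, `await_instruction`, `set_focal_length`) are hypothetical placeholders rather than an actual smart glasses API.

```python
def focal_length_control_step(camera, analyzer, interaction):
    # S101: acquire image data from the smart glasses camera
    frame = camera.capture_frame()

    # S102: blur recognition splits the frame into a blurred (first)
    # region and a clear (second) region
    first_region, second_region = analyzer.detect_regions(frame)

    # S103: build and output the focal length recommendation prompt
    prompt = analyzer.recommend_focal_length(first_region, second_region)
    interaction.show_prompt(prompt)

    # S104: obtain the user's focal length adjustment instruction
    # (voice or gesture) given in response to the prompt
    instruction = interaction.await_instruction(prompt)

    # S105: adjust the current focal length to the target focal length
    if instruction is not None:
        camera.set_focal_length(instruction.target_focal_length)
```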
In the embodiment of the disclosure, image data collected by a camera of the smart glasses is acquired. The camera may be an infrared camera or a photosensitive camera. The above is merely exemplary; it is not intended to limit the possible types of camera, nor is it intended to be exhaustive.
In the embodiment of the disclosure, image data collected by a camera of the smart glasses is acquired. The image data may be a still picture, or an image captured in a dynamic scene, such as a small animal playing in a zoo or a pedestrian walking in a mall. The above is merely exemplary; it is not intended to limit the possible content of the image data, nor is it intended to be exhaustive.
In the embodiment of the disclosure, the image data is recognized, and a first region and a second region are determined. The recognition is performed by a blur recognition algorithm that identifies the first region and the second region in the image data. Specifically, the first region is a blurred region, and the second region is a clear region. The blur recognition algorithm is not limited here and may be any algorithm that can perform blur recognition on image data.
In the embodiment of the disclosure, a focal length recommendation prompt is output based on the first region and the second region. The focal length recommendation prompt may be prompt text, such as "Adjust the focal length?" or "Focal length adjustment is recommended". It may also include specific values, such as "Recommended focal length adjustment: 60 cm" or "Recommended focal length adjustment: 30 cm". It may also be a focal length adjustment scheme, such as "focal length adjustment scheme one" or "focal length adjustment scheme two". The above is merely exemplary; it is not intended to limit the possible content of the focal length recommendation prompt, nor is it intended to be exhaustive.
Illustratively, an image of swans on a swan lake collected by the camera of the smart glasses is obtained, the image is recognized, and a first region and a second region are determined. The swans in the first region are blurred because they are farther away; the black swan in the second region is clear because it is closer. A focal length recommendation prompt is output based on the first region and the second region, where the prompt may include explicit recommendation content, for example, "Do you want to take a clear image of the swan lake?". The prompt may be presented as text, as an image, or by voice. The smart glasses acquire the focal length adjustment instruction input by the user based on the prompt, for example, the user agrees with the prompt by gesture or by voice, and the current focal length is adjusted based on the focal length adjustment instruction to obtain a clear image of the swan lake.
In the embodiment of the disclosure, a focal length adjustment instruction input based on the focal length recommendation prompt is acquired, and the focal length adjustment instruction is used for indicating adjustment of the current focal length. The instruction is determined based on the focal length recommendation prompt. It may be a voice instruction, which is processed by a voice recognition module of the smart glasses to indicate how the current focal length should be adjusted. It may also be a gesture instruction, which is recognized by a gesture recognition module of the smart glasses and compared against gestures in a gesture library to indicate how the current focal length should be adjusted. The voice recognition module and the gesture recognition module are integral parts of the smart glasses; their specific contents are described in detail later and are not repeated here.
In the embodiment of the disclosure, the current focal length is adjusted to the target focal length based on the focal length adjustment instruction. The target focal length is the focal length desired by the user and is determined from the focal length adjustment instruction; the smart glasses determine it according to the acquired instruction. The target focal length may be a specific focal length value, or it may be a range of focal length values.
According to the technical scheme of the embodiment of the disclosure, image data collected by the camera of the smart glasses is acquired; the image data is recognized and a first region and a second region are determined; a focal length recommendation prompt is output based on the first region and the second region; a focal length adjustment instruction input based on the prompt is acquired, the instruction being used for indicating adjustment of the current focal length; and the current focal length is adjusted to the target focal length based on the instruction. This focal length adjustment method solves the problem of the limited interaction of smart glasses, enriches the adjustability of the focal length, increases the selectivity of focal length control, and improves the accuracy of image recognition and image search and the sharpness of captured images, thereby improving the user experience of the smart glasses.
FIG. 2 is a schematic diagram of the process of determining the first region and the second region. As shown in FIG. 2, blur recognition is performed on the image data to obtain K regions; a region among the K regions whose sharpness is less than a first preset threshold is determined as the first region; a region among the K regions whose sharpness is greater than a second preset threshold is determined as the second region; K is an integer greater than 1, and the second preset threshold is not less than the first preset threshold.
In some embodiments, blur recognition is performed on the image data to obtain K regions. Specifically, the image data is divided into an M×N grid, where M is the number of columns and N is the number of rows into which the image is divided. Blur recognition is then performed on the image data to obtain the K regions, which include the first region and the second region, where K = M×N; for example, M = 4, N = 3, K = 12, and K, M and N are integers greater than 1.
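As a concrete illustration of this grid split, the following minimal Python sketch divides a frame into an M×N grid of K regions; it assumes the frame is a NumPy-style array and is only an example, not part of the disclosure.

```python
def split_into_regions(image, m=4, n=3):
    """Divide an image (H x W array) into an M x N grid; K = M * N regions."""
    height, width = image.shape[:2]
    regions = []
    for row in range(n):
        for col in range(m):
            y0, y1 = row * height // n, (row + 1) * height // n
            x0, x1 = col * width // m, (col + 1) * width // m
            regions.append({"index": row * m + col,
                            "box": (x0, y0, x1, y1),
                            "pixels": image[y0:y1, x0:x1]})
    return regions  # len(regions) == m * n == K
```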
In some embodiments, the first preset threshold is used to determine the first region and the second preset threshold is used to determine the second region. The first preset threshold and the second preset threshold may be default values or numerical ranges adjusted according to the scene. In the same scene, the second preset threshold is not smaller than the first preset threshold.
In some embodiments, a blur recognition algorithm is used to identify the first region and the second region in the image data. The blur recognition algorithm may be any algorithm that can achieve this purpose.
In some embodiments, a region among the K regions whose sharpness is less than the first preset threshold is determined as the first region, and a region whose sharpness is greater than the second preset threshold is determined as the second region. The first region is a blurred region. There are several possible reasons why the image data collected by the camera contains both blurred and clear regions: for example, the scene may be dynamic, with people or animals moving, which blurs part of the image; or the image may contain both distant and near fields, which likewise produces both blurred and sharp regions. Therefore, adjusting the focal length for different scenes is an important means of improving image recognition accuracy and image capture sharpness.
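A minimal sketch of this threshold-based classification is given below; the sharpness scores and the threshold values are illustrative assumptions, and any sharpness measure could be plugged in.

```python
def classify_regions(region_sharpness, first_threshold, second_threshold):
    """Split region indices into the first (blurred) and second (clear) regions.

    region_sharpness: mapping from region index to a sharpness score.
    Regions below first_threshold are blurred; regions above second_threshold
    are clear; second_threshold is assumed to be not less than first_threshold."""
    first_region = [i for i, s in region_sharpness.items() if s < first_threshold]
    second_region = [i for i, s in region_sharpness.items() if s > second_threshold]
    return first_region, second_region

# Example: with thresholds 50 and 100, regions 0 and 3 are blurred, region 7 is clear.
print(classify_regions({0: 12.0, 3: 40.0, 7: 180.0, 11: 75.0}, 50, 100))
```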
In some embodiments, an image of a playing kitten collected by the camera of the smart glasses is acquired, the image is recognized, and a first region and a second region are determined. At the current focal length, the stationary background around the kitten, where the grass and a large tree are located, is clear, while the region where the kitten is located is blurred. A focal length recommendation prompt asking whether the focal length should be adjusted is output, in which the kitten corresponds to focal length 1 and the background corresponds to focal length 2. After the smart glasses acquire a focal length adjustment instruction that selects focal length 1 based on the prompt, they adjust the current focal length accordingly and obtain a clear image of the kitten.
According to the technical scheme of the embodiment of the disclosure, blur recognition is performed on the image data to obtain K regions; the region whose sharpness is less than the first preset threshold is determined as the first region; and the region whose sharpness is greater than the second preset threshold is determined as the second region. In this way, blurred and clear regions can be identified in a variety of scenes, and the focal length adjustment instruction can be obtained based on them, improving the accuracy of image recognition and the sharpness of image capture on the smart glasses.
In some embodiments, recognizing the image data and determining the first region and the second region may include: taking the center point of each region as an autofocus point, and determining the sharpness of each region based on its autofocus point.
In some embodiments, each region refers to each of the K regions.
In some embodiments, the sharpness of each region is determined based on its autofocus point; here, sharpness indicates whether the region of the image data is blurred. FIG. 3 shows a schematic diagram of blur recognition of a planar image. As shown in FIG. 3, the camera's framed view is divided into 4×3 regions, and whether each region is blurred is determined using the center point of the region as an autofocus point. A clear region with a focal length of 1.0 can thus be obtained within the framed view. For example, if one region in the framed view is blurred, the focal length for that region can be adjusted so that it becomes clear, and its image can then be combined with the images of the other 11 clear regions to obtain a complete clear image.
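The disclosure does not fix a particular sharpness measure; the sketch below shows one possible stand-in that scores a window around each region's center point (the autofocus point) with the variance of the Laplacian, using OpenCV. The window size is an assumed value.

```python
import cv2

def center_sharpness(region_pixels, window=64):
    """Estimate a region's sharpness from a patch around its center point
    (the autofocus point), using the variance of the Laplacian.
    region_pixels: BGR image block for one grid region."""
    gray = cv2.cvtColor(region_pixels, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    half = max(1, min(window // 2, cy, cx))
    patch = gray[cy - half:cy + half, cx - half:cx + half]
    return float(cv2.Laplacian(patch, cv2.CV_64F).var())
```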
According to the technical scheme of the embodiment of the disclosure, the center point of each region is used as an autofocus point, and the sharpness of each region is determined based on that point. The first region and the second region in the image are thus obtained more accurately, a more reasonable and accurate focal length recommendation prompt is produced from them, and the accuracy of image recognition and the sharpness of image capture on the smart glasses are improved.
In the embodiment of the disclosure, when it is determined, based on the respective numbers of first regions and second regions, that the framing area of the camera is in a first state, a first focal length corresponding to the first region and a second focal length corresponding to the second region are determined, where the first focal length is the focal length at which the first region is clear and the second focal length is the focal length at which the second region is clear; a first focal length recommendation prompt is determined based on the first focal length and the second focal length, the focal length recommendation prompt including the first focal length recommendation prompt; and the first focal length recommendation prompt is output.
In some embodiments, the first state means that the image under the lens acquired by the camera is layered, that is, a blurred region may exist. Illustratively, the image under the lens is a swan lake in which white swans are in the distance, a black swan is nearby, and a wetland lies in the center of the lake. The distant view, the near view, and the intermediate position in the scene form distinct layers.
FIGS. 4, 5 and 6 show schematic diagrams of the blurred and clear regions of a close view, a middle view and a distant view, respectively. The image under the lens is a swan lake: tourists are photographed in front of the lake, the swans in the lake form the background, and a small pavilion stands in the lake in the distance. The scene under the lens therefore has three focal segments: near, middle and far. Under normal lighting, at the default focal length the faces of the tourists in the foreground are clear while the distant pavilion is blurred; that is, for this image the default focal length yields a clear foreground and a blurred far field. The blurred region, the clear region, and the boundary between them are determined by the blur recognition algorithm.
In some embodiments, when shooting, a blurred region in the lens view is detected and the camera automatically zooms in and out, that is, the focal length for the blurred region is swept between the maximum focal length value and the minimum focal length value; several clear images between the maximum and minimum focal lengths are selected, each corresponding to a different layer. Specifically, as shown in FIG. 4, in the first focal segment the people are clear and the swan lake is blurred; as shown in FIG. 5, in the second focal segment the tourists are blurred and the swan lake is clear; as shown in FIG. 6, in the third focal segment both the tourists and the swan lake are blurred while the distant pavilion is clear. Across these three layers of the image, different regions are emphasized in the shot. Illustratively, the focal length recommendation prompt lists a tourist focal length of 1.0, a swan lake focal length of 2.0 and a pavilion focal length of 3.0; in response to the voice instruction "zoom to 3.0", the camera adjusts the focal length to 3.0 so that the pavilion is clear, and in response to the voice instruction "zoom to 1.0", the tourists are clear. Focal length adjustment based on voice recognition thus provides a new input mode for the smart glasses and solves the problem of their limited interaction.
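The sketch below illustrates one possible way to represent the per-layer recommendation and to map a chosen focal length back to a layer; the layer names and values simply mirror the swan lake example, and the data structure is an assumption for illustration only.

```python
# Hypothetical per-layer focal lengths mirroring the example:
# tourists 1.0, swan lake 2.0, pavilion 3.0.
LAYER_FOCAL_LENGTHS = {"tourists": 1.0, "swan lake": 2.0, "pavilion": 3.0}

def build_first_focus_prompt(layer_focal_lengths):
    """Compose the first focal length recommendation prompt text."""
    options = ", ".join(f"{name}: {value}"
                        for name, value in layer_focal_lengths.items())
    return f"Recommended focal lengths ({options}). Say 'zoom to <value>' to choose."

def resolve_layer(target_focal_length, layer_focal_lengths):
    """Return the layer whose recommended focal length is closest to the request."""
    return min(layer_focal_lengths,
               key=lambda name: abs(layer_focal_lengths[name] - target_focal_length))

print(build_first_focus_prompt(LAYER_FOCAL_LENGTHS))
print(resolve_layer(3.0, LAYER_FOCAL_LENGTHS))  # 'pavilion' becomes clear
```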
In some embodiments, the focal length can also be adjusted through gesture recognition: in response to gesture actions in front of the lens, the gesture library is queried for comparison to obtain the corresponding gesture instruction, and the focal length for the image under the lens is adjusted based on that instruction.
According to the technical scheme of the embodiment of the disclosure, when the framing area of the camera is determined to be in the first state based on the respective numbers of first regions and second regions, a first focal length corresponding to the first region and a second focal length corresponding to the second region are determined, where the first focal length is the focal length at which the first region is clear and the second focal length is the focal length at which the second region is clear; a first focal length recommendation prompt is determined based on the first and second focal lengths, the focal length recommendation prompt including the first focal length recommendation prompt; and the first focal length recommendation prompt is output. This increases the scene coverage of the focal length adjustment of the smart glasses and helps improve the accuracy of image recognition and the sharpness of image capture in different scenes.
In an embodiment of the disclosure, the focal length recommendation prompt includes focal length information for each object in the framing area. When it is determined, based on the respective numbers of first regions and second regions, that the framing area of the camera is in a second state, the device motion state of the smart glasses is acquired; a second focal length recommendation prompt is determined based on the change in the device motion state of the smart glasses over a preset time period, the focal length recommendation prompt including the second focal length recommendation prompt; and the second focal length recommendation prompt is output.
In some embodiments, the second state means that the image captured under the lens is not layered, that is, it does not contain multiple depth layers. Illustratively, for a picture hanging on a wall, because the image under the lens is planar, the whole image is either blurred or clear: if it is blurred, it lies entirely in the blurred region, and if it is clear, it lies entirely in the clear region.
In some embodiments, a picture hanging on a wall is photographed by the camera of the smart glasses, and the user moves from point A to point B so that the camera is farther from the picture; at point B the picture is farther from the camera than it was at point A. The focal length of the camera therefore needs to be adjusted, and since the picture is planar there is no layering in its focal length. The object being shot is the entire picture, so when the smart glasses detect that the camera has moved from point A, close to the picture, to point B, far from the picture, they generate a second focal length recommendation prompt based on the change in the camera's motion state over a preset time, such as "Start focal length adjustment scheme 4?". For example, focal length adjustment scheme 4 is described as framing the entire picture. After the smart glasses obtain the instruction to start focal length adjustment scheme 4, they retrieve the scheme from the database and adjust the camera's focal length based on its parameters so that the whole picture is displayed in the viewfinder of the smart glasses.
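A minimal sketch of how the change in device motion state over a preset time window could trigger the second recommendation prompt is shown below; the `MotionSample` structure, the window length and the change threshold are all assumptions for illustration, not values given by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    timestamp: float            # seconds
    distance_to_target: float   # estimated camera-to-picture distance, metres

def second_focus_recommendation(samples, window=2.0, min_change=0.5):
    """Return a second focal length recommendation prompt if the glasses moved
    noticeably away from (or towards) the planar target within the window."""
    if len(samples) < 2:
        return None
    latest = samples[-1].timestamp
    recent = [s for s in samples if latest - s.timestamp <= window]
    change = recent[-1].distance_to_target - recent[0].distance_to_target
    if abs(change) >= min_change:
        direction = "away from" if change > 0 else "towards"
        return (f"Camera moved {direction} the target by {abs(change):.1f} m. "
                f"Start focal length adjustment scheme 4 (frame the entire picture)?")
    return None

samples = [MotionSample(0.0, 1.0), MotionSample(1.0, 1.4), MotionSample(1.8, 1.9)]
print(second_focus_recommendation(samples))
```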
According to the technical scheme of the embodiment of the disclosure, when the framing area of the camera is determined to be in the second state based on the respective numbers of first regions and second regions, the device motion state of the smart glasses is acquired; a second focal length recommendation prompt is determined based on the change in the device motion state over a preset time period, the focal length recommendation prompt including the second focal length recommendation prompt; and the second focal length recommendation prompt is output. In the second state, determining the focal length adjustment scheme from the device motion state information improves the flexibility of focal length control of the smart glasses and helps improve the accuracy of image recognition and the sharpness of image capture.
In an embodiment of the disclosure, the focal length recommendation prompt includes focal length information for each object in the framing area. A gesture state of a target object in front of the camera is acquired; a focal length adjustment instruction is obtained by analyzing the gesture state of the target object; and different gesture states correspond to different focal length adjustment instructions.
In some embodiments, the target object is specifically the wearer or user of the smart glasses. Determining that a gesture in front of the camera belongs to the target object includes: in response to detecting the gesture, judging the distance between the gesture and the camera through the change in the size of the object under the lens; if the gesture is close to the camera, it is the target person's gesture, and the focal length of the camera is adjusted in response to it; if the gesture is far from the camera, it is not the target person's gesture. The distance of the gesture from the camera may also be sensed by physical devices such as infrared sensors or lidar.
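One simple, purely illustrative way to apply this size-based distance proxy is sketched below; the relative-size threshold of 0.25 is an assumed value, not a parameter given by the disclosure.

```python
def is_wearer_gesture(hand_box_height_px, frame_height_px, min_relative_size=0.25):
    """Treat the gesture as the wearer's if the hand appears large enough in the
    frame, i.e. it is close to the camera (apparent size as a distance proxy)."""
    return (hand_box_height_px / frame_height_px) >= min_relative_size

# A hand filling 30% of the frame height is accepted as the wearer's gesture.
print(is_wearer_gesture(hand_box_height_px=324, frame_height_px=1080))  # True
```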
In some embodiments, the gesture state of the target object in front of the camera is acquired, and analyzing the gesture state to obtain the focal length adjustment instruction can be implemented by a gesture recognition module of the smart glasses. The gesture recognition module may include: a gesture acquisition sub-module for acquiring the gesture state of the target person in front of the camera; a gesture recognition sub-module for recognizing the collected gesture state; and a gesture analysis sub-module for analyzing the current gesture state and obtaining the focal length adjustment instruction corresponding to it.
In some embodiments, different gesture states correspond to different focal length adjustment instructions. For example, the "OK" gesture corresponds to focal length adjustment scheme 1, the "finger heart" gesture corresponds to adjusting the focal length to 1.0, and the "scissors" gesture corresponds to adjusting the focal length within 1.0 to 3.0. In response to detecting that the target person's gesture is the "finger heart" gesture, the smart glasses compare it against the gesture library to obtain the corresponding instruction "adjust the current focal length to 1.0", and adjust the current focal length to 1.0 in response to that instruction.
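A minimal lookup-table sketch of this gesture-to-instruction mapping follows; the gesture labels and instruction encoding are hypothetical and simply mirror the examples in the preceding paragraph.

```python
# Hypothetical mapping from recognized gesture labels to focal length
# adjustment instructions, mirroring the examples in the text.
GESTURE_INSTRUCTIONS = {
    "ok": {"type": "scheme", "value": 1},                     # adjustment scheme 1
    "finger_heart": {"type": "focal_length", "value": 1.0},
    "scissors": {"type": "focal_length_range", "value": (1.0, 3.0)},
}

def gesture_to_instruction(gesture_label):
    """Look up the focal length adjustment instruction for a recognized gesture."""
    return GESTURE_INSTRUCTIONS.get(gesture_label)

print(gesture_to_instruction("finger_heart"))  # {'type': 'focal_length', 'value': 1.0}
```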
FIG. 7 and FIG. 8 show first and second schematic diagrams of the process of adjusting the focal length based on a gesture state. As shown in FIG. 7 and FIG. 8, the user wants to adjust the focal length for the computer interface under the lens: in response to an initial gesture, the focal length adjustment mode is entered; the focal length is then adjusted based on a pinch gesture, becoming larger when the pinch gesture moves away from the lens and smaller when the pinch gesture moves towards the lens.
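The following sketch shows one possible continuous mapping from pinch-gesture distance to focal length, matching the behaviour just described; the gain and the focal length bounds are assumed values for illustration.

```python
def adjust_focal_length_by_pinch(current_focal_length, pinch_distance_m,
                                 previous_pinch_distance_m, gain=2.0,
                                 min_fl=1.0, max_fl=3.0):
    """Increase the focal length when the pinch gesture moves away from the lens
    and decrease it when the gesture moves closer (distances in metres)."""
    delta = pinch_distance_m - previous_pinch_distance_m
    new_focal_length = current_focal_length + gain * delta
    return max(min_fl, min(max_fl, new_focal_length))

# Pinch moves 0.1 m farther from the lens: focal length grows from 2.0 to 2.2.
print(adjust_focal_length_by_pinch(2.0, 0.45, 0.35))
```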
In some embodiments, the preset gestures may include static gestures and dynamic gestures, and preset gestures may further be used to adjust the movement direction of the camera. Different gesture states correspond to different focal length adjustment instructions; specifically, the user can register various gestures for adjusting the focal length according to their needs.
According to the technical scheme of the embodiment of the disclosure, the gesture state of a target object in front of the camera is acquired, and the focal length adjustment instruction is obtained by analyzing that gesture state. Obtaining the focal length adjustment instruction through preset gesture states improves the interaction diversity of the smart glasses and meets the usage needs of different users.
In the embodiment of the disclosure, a voice instruction input based on the focal length recommendation prompt is acquired, where the voice instruction is used to indicate pre-selected focal length information; the focal length adjustment instruction is obtained by analyzing the voice instruction.
In some implementations, the focal length recommendation prompt includes focal length information for the objects in the framing area. A voice instruction input based on the prompt is acquired, the voice instruction indicating pre-selected focal length information, and analyzing the voice instruction to obtain the focal length adjustment instruction can be implemented by a voice recognition module of the smart glasses. The voice recognition module may include: a voice acquisition sub-module for acquiring the voice instruction of the target person; a voice recognition sub-module for recognizing the voice instruction; and a voice analysis sub-module for analyzing the current voice instruction and obtaining the focal length adjustment instruction corresponding to it.
In some embodiments, in response to the focal length recommendation prompt, the target person gives the voice instruction "adjust the focal length to 2.0", and the focal length is adjusted to 2.0 accordingly.
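One minimal way the voice analysis sub-module could extract the target focal length from such an utterance is sketched below; the phrase patterns are assumed for illustration and do not represent the actual speech recognition pipeline of the smart glasses.

```python
import re

def parse_focus_voice_instruction(utterance):
    """Extract the target focal length from instructions such as
    'adjust the focal length to 2.0' or 'zoom to 3.0'."""
    match = re.search(r"(?:focal length to|zoom to)\s*([0-9]+(?:\.[0-9]+)?)",
                      utterance.lower())
    return float(match.group(1)) if match else None

print(parse_focus_voice_instruction("Adjust the focal length to 2.0"))  # 2.0
print(parse_focus_voice_instruction("zoom to 3.0"))                     # 3.0
```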
According to the technical scheme of the embodiment of the disclosure, a voice instruction input based on the focal length recommendation prompt is acquired, the voice instruction indicating pre-selected focal length information, and the focal length adjustment instruction is obtained by analyzing the voice instruction. Obtaining the focal length adjustment instruction through voice improves the interaction diversity of the smart glasses and meets the usage needs of different users.
In some embodiments, the focal length control method of the smart glasses may further include: when the focal length recommendation prompt function is turned off, entering a focal length adjustment mode in response to detecting an initial-state gesture; in the focal length adjustment mode, adjusting the focal length of the camera according to a detected preset gesture; and exiting the focal length adjustment mode in response to detecting an end-state gesture.
In some embodiments, the focal length recommendation prompt function may be turned on or off according to the user's needs. After the prompt function is turned off, the user can still adjust the focal length as required.
In some embodiments, the framing area under the lens is divided into K regions and it is determined whether any of them is blurred. If no blurred region exists, the user can turn off the focal length prompt function and adjust the focal length by gesture as required. Specifically, the user enters the focal length adjustment state through a preset first gesture, adjusts the current focal length through a preset second gesture, and exits the focal length adjustment state through a preset third gesture.
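The three-gesture flow just described can be pictured as a small state machine; the sketch below is only an illustration, and the gesture labels, the step size, and the `camera` interface (a `focal_length` attribute and a `set_focal_length` method) are assumptions rather than parts of the disclosed device.

```python
class GestureFocusController:
    """Minimal state machine for gesture-only focal length adjustment when the
    recommendation prompt function is turned off (gesture names are placeholders)."""

    def __init__(self, camera, step=0.1):
        self.camera = camera
        self.step = step
        self.adjusting = False

    def on_gesture(self, gesture):
        if gesture == "enter_adjust":                    # preset first gesture
            self.adjusting = True
        elif gesture == "exit_adjust":                   # preset third gesture
            self.adjusting = False
        elif self.adjusting and gesture == "zoom_in":    # preset second gesture
            self.camera.set_focal_length(self.camera.focal_length + self.step)
        elif self.adjusting and gesture == "zoom_out":   # preset second gesture
            self.camera.set_focal_length(self.camera.focal_length - self.step)
```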
In some embodiments, the framing area under the lens is divided into K regions and it is determined whether any of them is blurred. If a blurred region exists, the user can turn on the focal length prompt function and adjust the focal length by voice or gesture based on the focal length recommendation prompt.
According to the technical scheme of the embodiment of the disclosure, when the focal length recommendation prompt function is turned off, the focal length adjustment mode is entered in response to detecting an initial-state gesture; in the focal length adjustment mode, the focal length of the camera is adjusted according to the detected preset gesture; and the focal length adjustment mode is exited in response to detecting an end-state gesture. This simple focal length adjustment scheme meets the needs of users in different scenes and allows the focal length of the smart glasses camera to be adjusted, improving the accuracy of image recognition and the sharpness of image capture.
It should be understood that the schematic diagrams shown in FIGS. 2 to 8 are merely exemplary rather than limiting and are extensible; those skilled in the art may make various obvious changes and/or substitutions based on these examples, and the resulting technical solutions still fall within the scope of the embodiments of the present disclosure.
An embodiment of the disclosure provides a focal length control device for smart glasses. As shown in FIG. 9, the device may include: a first acquisition module 901 configured to acquire image data collected by a camera of the smart glasses; a determination module 902 configured to recognize the image data and determine a first region and a second region; an output module 903 configured to output a focal length recommendation prompt based on the first region and the second region; a second acquisition module 904 configured to acquire a focal length adjustment instruction input based on the focal length recommendation prompt, where the focal length adjustment instruction is used to indicate adjustment of the current focal length; and a first adjustment module 905 configured to adjust the current focal length to the target focal length based on the focal length adjustment instruction.
In some embodiments, the determination module 902 includes: a first acquisition sub-module configured to perform blur recognition on the image data to obtain K regions; a first determination sub-module configured to determine a region among the K regions whose sharpness is less than a first preset threshold as the first region; and a second determination sub-module configured to determine a region among the K regions whose sharpness is greater than a second preset threshold as the second region, where K is an integer greater than 1 and the second preset threshold is not less than the first preset threshold.
In some embodiments, the determination module 902 further includes: a third determination sub-module configured to take the center point of each region as an autofocus point; and a fourth determination sub-module configured to determine the sharpness of each region based on its autofocus point.
In some embodiments, the output module 903 includes: a fifth determination sub-module configured to determine, when the framing area of the camera is determined to be in the first state based on the respective numbers of first regions and second regions, a first focal length corresponding to the first region and a second focal length corresponding to the second region, where the first focal length is the focal length at which the first region is clear and the second focal length is the focal length at which the second region is clear; a sixth determination sub-module configured to determine a first focal length recommendation prompt based on the first focal length and the second focal length, the focal length recommendation prompt including the first focal length recommendation prompt; and a first output sub-module configured to output the first focal length recommendation prompt.
In some embodiments, the output module 903 further includes: a second acquisition sub-module configured to acquire the device motion state of the smart glasses when the framing area of the camera is determined to be in the second state based on the respective numbers of first regions and second regions; a seventh determination sub-module configured to determine a second focal length recommendation prompt based on the change in the device motion state of the smart glasses over a preset time period, the focal length recommendation prompt including the second focal length recommendation prompt; and a second output sub-module configured to output the second focal length recommendation prompt.
In some embodiments, the focal length recommendation prompt includes focal length information for each object in the framing area, and the second acquisition module 904 includes: a third acquisition sub-module configured to acquire the gesture state of the target object in front of the camera; and a first analysis sub-module configured to analyze the gesture state of the target object to obtain the focal length adjustment instruction, where different gesture states correspond to different focal length adjustment instructions.
In some embodiments, the focal length recommendation prompt includes focal length information for each object in the framing area, and the second acquisition module 904 further includes: a fourth acquisition sub-module configured to acquire a voice instruction input based on the focal length recommendation prompt, the voice instruction indicating pre-selected focal length information; and a second analysis sub-module configured to analyze the voice instruction to obtain the focal length adjustment instruction.
In some embodiments, the focal length control device of the smart glasses further includes: an entry control module 906 (not shown in FIG. 9) configured to enter the focal length adjustment mode in response to detecting an initial-state gesture when the focal length recommendation prompt function is turned off; a second adjustment module 907 (not shown in FIG. 9) configured to adjust the focal length of the camera in the focal length adjustment mode according to the detected preset gesture; and an exit control module 908 (not shown in FIG. 9) configured to exit the focal length adjustment mode in response to detecting an end-state gesture.
Those skilled in the art should understand that the functions of each processing module in the focal length control device of the smart glasses according to the embodiments of the present disclosure can be understood with reference to the foregoing description of the focal length control method. Each processing module may be implemented by an analog circuit that performs the functions described in the embodiments of the present disclosure, or by software executing on an electronic device that implements those functions.
With the focal length control device of the smart glasses according to the embodiments of the present disclosure, the focal length of the smart glasses camera can be adjusted intelligently through gestures or voice instructions, improving the accuracy of image recognition and the sharpness of image capture.
An embodiment of the disclosure provides a schematic diagram of a scenario of focal length control of smart glasses, as shown in FIG. 10.
As described above, the focal length control method of the smart glasses provided by the embodiment of the present disclosure is applied to an electronic device. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses.
In particular, the electronic device may perform the following operations:
acquiring image data collected by a camera of the smart glasses;
recognizing the image data and determining a first region and a second region;
outputting a focal length recommendation prompt based on the first region and the second region;
acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of the current focal length;
and adjusting the current focal length to a target focal length based on the focal length adjustment instruction.
The image data collected by the camera of the smart glasses can be obtained from an image data source. The image data source may be any of various forms of data storage device, such as a laptop computer, desktop computer, workstation, personal digital assistant, server, blade server, mainframe computer, or other suitable computer. The image data source may also be any of various forms of mobile device, such as a personal digital assistant, cellular telephone, smartphone, wearable device, or other similar computing device. Furthermore, the image data source and the user terminal may be the same device.
It should be understood that the scenario diagram shown in FIG. 10 is merely illustrative rather than restrictive; those skilled in the art may make various obvious changes and/or substitutions based on the example of FIG. 10, and the resulting technical solutions still fall within the scope of the embodiments of the present disclosure.
In the technical scheme of the disclosure, the collection, storage, and use of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
Fig. 11 illustrates a schematic block diagram of an example electronic device 1100 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 11, the device 1100 includes a computing unit 1101 that can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data required for the operation of the device 1100 can also be stored. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An Input/Output (I/O) interface 1105 is also connected to the bus 1104.
Various components in device 1100 are connected to I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, etc.; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108, such as a magnetic disk, optical disk, etc.; and a communication unit 1109 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1101 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like. The computing unit 1101 performs the respective methods and processes described above, for example, the focal length control method of the smart glasses. For example, in some embodiments, the focal length control method of the smart glasses may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1108. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the focal length control method of the smart glasses described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the focal length control method of the smart glasses by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed herein can be achieved, which is not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (19)
1. A focal length control method of intelligent glasses, comprising:
acquiring image data acquired by a camera of the intelligent glasses;
identifying the image data and determining a first region and a second region;
outputting a focal length recommendation prompt based on the first region and the second region;
acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of a current focal length;
and adjusting the current focal length to be a target focal length based on the focal length adjustment instruction.
2. The method of claim 1, wherein the identifying the image data and determining a first region and a second region comprises:
performing blur recognition on the image data to obtain K regions;
determining a region of the K regions whose sharpness is less than a first preset threshold as the first region;
determining a region of the K regions whose sharpness is greater than a second preset threshold as the second region; wherein K is an integer greater than 1, and the second preset threshold is not smaller than the first preset threshold.
3. The method of claim 2, wherein the identifying the image data and determining a first region and a second region further comprises:
respectively taking the central point of each region as an auto-focus point;
determining the sharpness of each region based on the auto-focus point of each region.
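Claims 2 and 3 do not fix a particular sharpness measure, so the sketch below uses a common Laplacian-variance proxy evaluated in a small patch around each region's centre point. The patch size, the thresholds, and the NumPy-only implementation are assumptions for illustration and are not part of the claims.

```python
import numpy as np

def sharpness(gray_patch: np.ndarray) -> float:
    """Sharpness proxy: variance of a discrete Laplacian response (higher = sharper)."""
    p = gray_patch.astype(np.float32)
    lap = (-4.0 * p[1:-1, 1:-1]
           + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return float(lap.var())

def classify_regions(gray: np.ndarray, boxes, t1: float, t2: float):
    """Split K candidate regions into first (blurry) and second (clear) regions.

    boxes: list of (x, y, w, h) rectangles; t1 <= t2, as required by claim 2.
    Sharpness is measured around each region's centre, standing in for the
    auto-focus point of claim 3.
    """
    first, second = [], []
    half = 16  # half-width of the centre patch; an assumed value
    for (x, y, w, h) in boxes:
        cx, cy = x + w // 2, y + h // 2
        patch = gray[max(cy - half, 0): cy + half, max(cx - half, 0): cx + half]
        if patch.shape[0] < 3 or patch.shape[1] < 3:
            continue  # centre patch too small to evaluate
        s = sharpness(patch)
        if s < t1:
            first.append((x, y, w, h))   # first region: sharpness below the first threshold
        elif s > t2:
            second.append((x, y, w, h))  # second region: sharpness above the second threshold
    return first, second
```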
4. The method of claim 1, wherein the outputting a focal length recommendation prompt based on the first region and the second region comprises:
determining, when it is determined based on the respective numbers of the first region and the second region that the framing area of the camera is in a first state, a first focal length corresponding to the first region and a second focal length corresponding to the second region, wherein the first focal length is a focal length at which the first region is in a clear state, and the second focal length is a focal length at which the second region is in a clear state;
determining a first focal length recommendation prompt based on the first focal length and the second focal length, the focal length recommendation prompt comprising the first focal length recommendation prompt;
and outputting the first focal length recommendation prompt.
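Claim 4 leaves open both what the "first state" is and how a region is mapped to the focal length at which it appears clear. The sketch below assumes the first state means that at least one blurry and one clear region are present, and that a caller-supplied lookup provides the clear-state focal length for a region; both are illustrative assumptions rather than definitions from this disclosure.

```python
def build_first_prompt(first_regions, second_regions, clear_focal_length):
    """Build the first focal length recommendation prompt of claim 4.

    clear_focal_length(region) is an assumed helper returning the focal length
    (in mm) at which the given region would appear clear.
    """
    # Assumed meaning of the "first state": at least one blurry and one clear region.
    if not (first_regions and second_regions):
        return None
    f1 = clear_focal_length(first_regions[0])    # first focal length (clears the blurry region)
    f2 = clear_focal_length(second_regions[0])   # second focal length (keeps the clear region sharp)
    return {
        "text": f"Blurry area sharp near {f1:.0f} mm; current subject sharp near {f2:.0f} mm.",
        "options": {"first": f1, "second": f2},
    }
```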
5. The method of claim 1, wherein the outputting a focal length recommendation prompt based on the first region and the second region comprises:
acquiring a device motion state of the intelligent glasses under the condition that the framing area of the camera is determined to be in a second state based on the respective numbers of the first region and the second region;
determining a second focal length recommendation prompt based on change information of the device motion state of the intelligent glasses within a preset time period, wherein the focal length recommendation prompt comprises the second focal length recommendation prompt;
and outputting the second focal length recommendation prompt.
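For claim 5, how motion-state changes translate into a recommendation is not spelled out. The sketch below assumes a simple rule: if the wearer has been moving toward the scene over the preset period, a shorter focal length is suggested, otherwise a longer one. The IMU sample format, the speed thresholds, and the 10 mm step are all invented for illustration.

```python
def build_second_prompt(motion_samples, current_focal_length, window_s=2.0):
    """Build the second focal length recommendation prompt of claim 5.

    motion_samples: list of (timestamp_s, forward_speed_m_per_s) tuples from the
    glasses' motion sensor; the field layout and decision rule are assumptions.
    """
    if not motion_samples:
        return None
    latest_t = motion_samples[-1][0]
    recent = [v for t, v in motion_samples if t >= latest_t - window_s]  # preset time period
    mean_speed = sum(recent) / len(recent)
    if mean_speed > 0.2:      # moving toward the scene: suggest a wider (shorter) focal length
        suggested = max(current_focal_length - 10.0, 18.0)
    elif mean_speed < -0.2:   # moving away: suggest a longer focal length
        suggested = current_focal_length + 10.0
    else:                     # roughly static: keep the current focal length
        suggested = current_focal_length
    return {"text": f"Suggested focal length: {suggested:.0f} mm",
            "options": {"suggested": suggested}}
```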
6. The method of claim 4 or 5, wherein the focal length recommendation prompt includes focal length information for each object in the framing area, and wherein the acquiring a focal length adjustment instruction input based on the focal length recommendation prompt comprises:
acquiring a gesture state of a target object in front of the camera;
obtaining the focal length adjustment instruction by analysis based on the gesture state of the target object, wherein different gesture states correspond to different focal length adjustment instructions.
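Claim 6 only requires that different gesture states correspond to different focal length adjustment instructions; a table-driven mapping such as the following is one straightforward way to realize that. The gesture names and instruction payloads are invented for illustration and are not defined by the claim.

```python
# Hypothetical gesture-to-instruction table; claim 6 prescribes only the
# one-to-one correspondence, not the concrete gestures.
GESTURE_TO_INSTRUCTION = {
    "pinch_out":  {"action": "zoom_in",  "step_mm": 5.0},
    "pinch_in":   {"action": "zoom_out", "step_mm": 5.0},
    "point_up":   {"action": "select",   "target": "first"},
    "point_down": {"action": "select",   "target": "second"},
}

def instruction_from_gesture(gesture_state: str):
    """Resolve a detected gesture state into a focal length adjustment instruction."""
    return GESTURE_TO_INSTRUCTION.get(gesture_state)  # None if the gesture is not recognized
```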
7. The method of claim 4 or 5, wherein the focal length recommendation prompt includes focal length information for each object in the framing area, and wherein the acquiring a focal length adjustment instruction input based on the focal length recommendation prompt comprises:
acquiring a voice instruction input based on the focal length recommendation prompt, wherein the voice instruction is used to indicate pre-selected focal length information;
and obtaining the focal length adjustment instruction by analysis based on the voice instruction.
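Claim 7 likewise leaves the voice grammar open. The sketch below matches a few assumed keywords against the focal length options carried by the prompt; the option labels and the keyword matching strategy are illustrative assumptions.

```python
def instruction_from_voice(utterance: str, prompt_options: dict):
    """Parse a voice instruction selecting one of the pre-listed focal length options.

    prompt_options: mapping of spoken labels to focal lengths, e.g. {"first": 35.0};
    the labels and the simple substring matching below are assumptions.
    """
    text = utterance.lower()
    for label, focal_length in prompt_options.items():
        if label in text or f"{focal_length:.0f}" in text:
            return {"action": "set_focal_length", "value": focal_length}
    return None  # no pre-selected focal length information recognized
```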
8. The method of claim 1, further comprising:
in a case where the function of the focal length recommendation prompt is disabled, entering a focal length adjustment mode in response to detecting an initial state gesture;
in the focal length adjustment mode, adjusting the focal length of the camera according to the detected preset gesture;
and exiting the focal length adjustment mode in response to detecting an end state gesture.
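Claim 8 describes a small gesture-driven state machine for the case where the recommendation prompt is disabled. A minimal sketch could look like the following; the gesture names, the step size, and the camera interface are assumptions introduced for illustration.

```python
class GestureFocalLengthMode:
    """Enter/adjust/exit cycle of claim 8 when the focal length recommendation prompt is disabled."""

    def __init__(self, camera, step_mm=2.0):
        self.camera = camera      # assumed to expose get_focal_length() / set_focal_length()
        self.step_mm = step_mm
        self.active = False

    def on_gesture(self, gesture: str):
        if not self.active:
            if gesture == "start":            # initial state gesture: enter the adjustment mode
                self.active = True
            return
        if gesture == "end":                  # end state gesture: exit the adjustment mode
            self.active = False
        elif gesture == "swipe_right":        # preset gesture: increase the focal length
            self.camera.set_focal_length(self.camera.get_focal_length() + self.step_mm)
        elif gesture == "swipe_left":         # preset gesture: decrease the focal length
            self.camera.set_focal_length(self.camera.get_focal_length() - self.step_mm)
```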
9. A focal length control device for smart glasses, comprising:
the first acquisition module is used for acquiring image data acquired by a camera of the intelligent glasses;
the determining module is used for identifying the image data and determining a first area and a second area;
the output module is used for outputting a focal length recommendation prompt based on the first area and the second area;
the second acquisition module is used for acquiring a focal length adjustment instruction input based on the focal length recommendation prompt, wherein the focal length adjustment instruction is used for indicating adjustment of the current focal length;
and the first adjusting module is used for adjusting the current focal length to be a target focal length based on the focal length adjusting instruction.
10. The apparatus of claim 9, wherein the determining module comprises:
the first acquisition submodule is used for carrying out blur recognition on the image data to obtain K regions;
a first determining submodule, configured to determine, as the first region, a region with a sharpness smaller than a first preset threshold value in the K regions;
a second determining submodule, configured to determine, as the second region, a region whose sharpness is greater than a second preset threshold value in the K regions; k is an integer greater than 1, and the second preset threshold value is not smaller than the first preset threshold value.
11. The apparatus of claim 10, wherein the determining module further comprises:
a third determining sub-module, configured to respectively take a center point of each region as an auto-focus point;
and a fourth determining sub-module, configured to determine the sharpness of each region based on the auto-focus point of each region.
12. The apparatus of claim 9, wherein the output module comprises:
a fifth determining submodule, configured to determine, when it is determined that the framing area of the camera is in a first state based on the number of the first area and the second area, a first focal length corresponding to the first area and a second focal length corresponding to the second area, where the first focal length is a focal length at which the first area is in a clear state, and the second focal length is a focal length at which the second area is in a clear state;
a sixth determining submodule, configured to determine a first focal length recommendation prompt based on the first focal length and the second focal length, the focal length recommendation prompt including the first focal length recommendation prompt;
and the first output sub-module is used for outputting the first focal length recommendation prompt.
13. The apparatus of claim 9, wherein the output module comprises:
a second obtaining sub-module, configured to obtain a device motion state of the smart glasses when determining that the framing area of the camera is in a second state based on the number of the first area and the second area;
a seventh determining submodule, configured to determine a second focal length recommendation prompt based on change information of the device motion state of the smart glasses within a preset time period, where the focal length recommendation prompt includes the second focal length recommendation prompt;
and the second output sub-module is used for outputting the second focal length recommendation prompt.
14. The apparatus of claim 12 or 13, wherein the focal length recommendation prompt includes focal length information for each object in the framing area, and wherein the second acquisition module comprises:
the third acquisition sub-module is used for acquiring the gesture state of the target object in front of the camera;
the first analysis submodule is used for obtaining the focal length adjustment instruction by analysis based on the gesture state of the target object, wherein different gesture states correspond to different focal length adjustment instructions.
15. The apparatus of claim 12 or 13, wherein the focal length recommendation prompt includes focal length information for each object in the framing area, and wherein the second acquisition module comprises:
a fourth obtaining sub-module, configured to obtain a voice instruction input based on the focal length recommendation prompt, wherein the voice instruction is used to indicate pre-selected focal length information;
and the second analysis submodule is used for obtaining the focal length adjustment instruction by analysis based on the voice instruction.
16. The apparatus of claim 9, further comprising:
the entering control module is used for entering a focal length adjustment mode in response to detecting an initial state gesture in a case where the function of the focal length recommendation prompt is disabled;
the second adjusting module is used for adjusting the focal length of the camera according to the detected preset gesture in the focal length adjustment mode;
and the exit control module is used for exiting the focal length adjustment mode in response to detecting an end state gesture.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program stored on a storage medium, which, when executed by a processor, implements the method according to any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310424776.5A CN116600198A (en) | 2023-04-19 | 2023-04-19 | Focal length control method and device of intelligent glasses, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310424776.5A CN116600198A (en) | 2023-04-19 | 2023-04-19 | Focal length control method and device of intelligent glasses, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116600198A true CN116600198A (en) | 2023-08-15 |
Family
ID=87610623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310424776.5A Pending CN116600198A (en) | 2023-04-19 | 2023-04-19 | Focal length control method and device of intelligent glasses, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116600198A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210264133A1 (en) | Face location tracking method, apparatus, and electronic device | |
US11081137B2 (en) | Method and device for processing multimedia information | |
US10440284B2 (en) | Determination of exposure time for an image frame | |
TW202042175A (en) | Image processing method and apparatus, electronic device and storage medium | |
CN112149636A (en) | Method, apparatus, electronic device and storage medium for detecting target object | |
CN113747085A (en) | Method and device for shooting video | |
TW202029125A (en) | Method, apparatus and electronic device for image processing and storage medium thereof | |
US11551465B2 (en) | Method and apparatus for detecting finger occlusion image, and storage medium | |
JP6157165B2 (en) | Gaze detection device and imaging device | |
CN113873166A (en) | Video shooting method and device, electronic equipment and readable storage medium | |
CN108259767B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN108495038B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN114173061B (en) | Multi-mode camera shooting control method and device, computer equipment and storage medium | |
CN116600198A (en) | Focal length control method and device of intelligent glasses, electronic equipment and storage medium | |
CN115883958A (en) | Portrait shooting method | |
CN110910304B (en) | Image processing method, device, electronic equipment and medium | |
CN115812308B (en) | Shooting control method and device, intelligent equipment and computer readable storage medium | |
KR20140134844A (en) | Method and device for photographing based on objects | |
CN114071024A (en) | Image shooting method, neural network training method, device, equipment and medium | |
KR102094944B1 (en) | Method for eye-tracking and terminal for executing the same | |
JP2017228873A (en) | Image processing apparatus, imaging device, control method and program | |
CN118675174A (en) | Image detection method, device, apparatus, storage medium, and program product | |
CN116980758A (en) | Video blurring method, electronic device, storage medium and computer program | |
JP2022054247A (en) | Information processing device and information processing method | |
CN113194247A (en) | Photographing method, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |