CN108769527B - Scene identification method and device and terminal equipment


Info

Publication number: CN108769527B
Authority: CN (China)
Prior art keywords: scene type, focal length, image content, length value, scene
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN201810605325.0A
Other languages: Chinese (zh)
Other versions: CN108769527A
Inventor: 张弓
Assignee: Oppo Chongqing Intelligent Technology Co Ltd
Application filed by Oppo Chongqing Intelligent Technology Co Ltd, with priority to CN201810605325.0A; publication of application CN108769527A; application granted; publication of granted patent CN108769527B.


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

The application is applicable to the technical field of communications and provides a scene identification method, a scene identification device, and a terminal device. The scene identification method includes the following steps: acquiring the image content of a camera preview interface of the terminal device; identifying the image content to obtain a preliminary judgment result of the scene type; acquiring the focal length value of the camera; and judging whether the focal length value is greater than a preset focal length value, where the preset focal length value is the focal length value used for shooting the real scene type corresponding to the preliminary judgment result. If the focal length value is greater than the preset focal length value, the scene type corresponding to the preliminary judgment result is judged to be a real scene type. By this method, the accuracy of the scene type judgment result can be improved.

Description

Scene identification method and device and terminal equipment
Technical Field
The present application belongs to the field of communications technologies, and in particular, to a scene recognition method, a scene recognition apparatus, a terminal device, and a computer-readable storage medium.
Background
In order to seize the market, manufacturers of terminal devices (such as mobile phones and tablet computers) and digital cameras continuously update the photographing function of the terminal devices. For example, a terminal device or a digital camera provides a plurality of scene modes, and each scene mode sets corresponding scene parameters in advance, where the scene parameters include aperture, shutter, focal length, and the like.
However, in actual shooting, if the user has to select the scene mode matching the current shooting content from the multiple scene modes provided, the user may spend too much time and miss the best shooting opportunity.
Disclosure of Invention
In view of this, embodiments of the present application provide a scene recognition method, a scene recognition apparatus, and a terminal device, so as to solve the problem in the prior art that the user must manually select a scene mode for photographing, which consumes excessive time.
A first aspect of an embodiment of the present application provides a scene identification method, including:
acquiring image content of a camera preview interface of the terminal equipment;
identifying the image content to obtain a preliminary judgment result of the scene type;
acquiring a focal length value of a camera, wherein the focal length value of the camera is a focal length value adopted for shooting image content of a camera preview interface;
judging whether the focal length value is greater than a preset focal length value, where the preset focal length value is the focal length value used for shooting the real scene type corresponding to the preliminary judgment result;
and if the focal length value is greater than the preset focal length value, judging the scene type corresponding to the preliminary judgment result to be a real scene type.
A second aspect of an embodiment of the present application provides a scene recognition apparatus, including:
an image content acquisition unit, configured to acquire the image content of a camera preview interface of the terminal device;
an image content identification unit, configured to identify the image content to obtain a preliminary judgment result of the scene type;
a focal length value acquisition unit, configured to acquire a focal length value of a camera, where the focal length value of the camera is the focal length value used for shooting the image content of the camera preview interface;
a scene type preliminary judgment unit, configured to judge whether the focal length value is greater than a preset focal length value, where the preset focal length value is a focal length value used for shooting a real scene type corresponding to the preliminary judgment result;
and a scene type determination unit, configured to determine the scene type corresponding to the preliminary judgment result as the real scene type if the focal length value is greater than the preset focal length value.
A third aspect of the embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the scene recognition method when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the scene recognition method described above.
Compared with the prior art, the embodiment of the application has the advantages that:
the method has the advantages that the scene type corresponding to the image content can be automatically identified according to the image content, so that the user does not need to manually select the scene type, the operation steps of the user are reduced, and in addition, the scene type corresponding to the image content of the camera preview interface is judged to be the real scene type if the focal length value adopted for shooting the image content of the camera preview interface is larger than the focal length value adopted for shooting the real scene type corresponding to the image content, so that the real scene type and the pseudo scene type can be further identified, and the accuracy of the judgment result of the scene type is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a scene recognition method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another scene recognition method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a scene recognition apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of another scene recognition apparatus provided in an embodiment of the present application;
fig. 5 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the mobile terminals described in embodiments of the present application include, but are not limited to, other portable devices such as mobile phones, laptop computers, or tablet computers having touch sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the devices described above are not portable communication devices, but rather are desktop computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads).
In the discussion that follows, a mobile terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the mobile terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The mobile terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the mobile terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
The first embodiment is as follows:
fig. 1 is a schematic flowchart of a scene recognition method provided in an embodiment of the present application, which is detailed as follows:
in step S11, image content of the camera preview interface of the terminal device is acquired.
The terminal device in this step includes devices such as a mobile phone, a tablet computer, and a digital camera. The image content in this step includes information such as the pixel values and brightness values of the image.
Step S12, recognizing the image content, and obtaining a preliminary determination result of the scene type.
The scene types in the embodiments of the present application include portrait, landscape, beach, blue sky, macro, and the like. The scene type is mainly determined according to the foreground and background of the image. For example, when the foreground of the image is identified as mainly a portrait, the scene type of the image is judged to be portrait; when the foreground (or the foreground and background) of the image is identified as mainly mountains, the scene type of the image is judged to be landscape.
Step S13, acquiring a focal length value of the camera, where the focal length value of the camera is a focal length value used for shooting image content of the camera preview interface.
The focal length value of the camera here refers to the focal length value corresponding to the image content to be captured on the camera preview interface. When the terminal device is a mobile phone, this step acquires the focal length value of the mobile phone's camera; when the terminal device is a digital camera, this step acquires the focal length value of the digital camera.
In this step, the focal length value of the camera is related to the distance between the object to be shot and the camera: when the object to be shot is farther from the camera, the focal length value of the camera is smaller; when the object to be shot is closer to the camera, the focal length value of the camera is larger.
Step S14, determining whether the focal length value is greater than a preset focal length value, where the preset focal length value is a focal length value used for shooting the real scene type corresponding to the preliminary determination result.
Specifically, corresponding focal length values are preset for the different predefined scene types, and the focal length values corresponding to different scene types may be the same or different. For example, for the portrait and landscape scene types, since a portrait is typically closer to the camera than a landscape, the preset focal length value corresponding to portrait is greater than the preset focal length value corresponding to landscape. On the other hand, for scene types such as landscape and beach, the same focal length value can be preset, because the image content corresponding to these scene types all focuses on distant scenes. In this embodiment, a real scene refers to an actual scene, while a pseudo scene refers to a scene displayed on a photo, book, periodical, computer web page, or the like. For example, when the terminal device is shooting a real beach, the scene corresponding to the image content (the beach) of the camera preview interface of the terminal device is a real scene, and the real scene type is specifically "beach"; when the terminal device is shooting a beach image displayed on a computer web page, the scene corresponding to the image content (the beach in the picture) of the camera preview interface is a pseudo scene, and the pseudo scene type is specifically "beach".
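As an illustration only, the per-scene-type preset focal length values described above can be sketched as a simple lookup table. The patent gives no concrete numbers, so every scene name and numeric value here is hypothetical:

```python
# Hypothetical preset focal length values per scene type (illustrative only;
# the patent specifies no concrete numbers). Per the description, close
# subjects such as portraits get a larger preset value than distant scenery,
# and long-distance scene types such as landscape and beach may share a value.
PRESET_FOCAL_LENGTHS = {
    "portrait": 5.0,
    "landscape": 2.0,
    "beach": 2.0,      # same preset as landscape: both focus on distant scenes
    "blue_sky": 2.0,
    "macro": 8.0,
}

def preset_focal_length(scene_type: str) -> float:
    """Return the preset focal length value for a preliminarily judged scene type."""
    return PRESET_FOCAL_LENGTHS[scene_type]
```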
And step S15, if the focal length value is greater than the preset focal length value, determining the scene type corresponding to the preliminary judgment result as the real scene type.
In this embodiment, when it is judged that the focal length value used for shooting the image content of the camera preview interface is greater than the focal length value used for shooting the real scene type corresponding to that image content, the scene type corresponding to the image content of the camera preview interface is judged to be a real scene type, and the specific scene type is the scene type corresponding to the preliminary judgment result. For example, if the preliminary judgment result judges the scene type of the image content of the camera preview interface to be portrait, and the focal length value of the camera is judged to be greater than the preset focal length value corresponding to the portrait scene type, the scene type of the image content of the camera preview interface is finally judged to be the real portrait scene type.
Optionally, the scene recognition method further includes:
and if the focal length value is less than or equal to the preset focal length value, judging the scene type of the image content to be a pseudo scene type.
In this embodiment, when it is judged that the focal length value used for shooting the image content of the camera preview interface is less than or equal to the focal length value used for shooting the real scene type corresponding to that image content, the scene type corresponding to the image content of the camera preview interface is judged to be a pseudo scene type, and the specific scene type is the scene type corresponding to the preliminary judgment result. For example, when image content corresponding to a portrait in a photo (or in an image) appears on the camera preview interface, if the preliminary judgment result judges the scene type to be portrait and the focal length value of the camera is judged to be less than or equal to the preset focal length value corresponding to the portrait scene type, the scene type of the image content of the camera preview interface is finally judged to be the pseudo-portrait scene type.
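The real/pseudo decision of steps S14 and S15, together with the pseudo-scene branch above, reduces to a single comparison. A minimal sketch, with hypothetical function and label names:

```python
def judge_scene(preliminary_scene: str, focal_length: float,
                preset_focal_length: float) -> str:
    """Steps S14/S15: a scene is judged real only when the focal length value
    used for the preview image exceeds the preset value for that scene type;
    otherwise it is judged a pseudo scene (e.g. a beach shown on a screen)."""
    if focal_length > preset_focal_length:
        return "real:" + preliminary_scene
    return "pseudo:" + preliminary_scene
```

Note the strict inequality: a focal length value exactly equal to the preset value falls into the pseudo-scene branch, matching the "less than or equal to" wording of the description.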
In the embodiment of the application, the image content of the camera preview interface of the terminal device is acquired; the image content is identified to obtain a preliminary judgment result of the scene type; the focal length value of the camera is acquired; and whether the focal length value is greater than the preset focal length value is judged. If it is greater, the scene type of the image content is judged to be a real scene type and the preliminary judgment result is taken as the final judgment result; otherwise, the scene type of the image content is judged to be a pseudo scene type. Because the scene type corresponding to the image content is identified automatically, the user does not need to manually select a scene type, which reduces the user's operation steps. In addition, because a scene type is judged real only when the focal length value used for shooting the preview image content is greater than the focal length value used for shooting the corresponding real scene type, real and pseudo scene types can be distinguished, which improves the accuracy of the scene type judgment result.
Example two:
fig. 2 is a schematic flowchart of another scene identification method according to an embodiment of the present application. In fig. 2, steps S21 to S25 are the same as steps S11 to S15 of the first embodiment, and are not repeated here.
In step S21, image content of the camera preview interface of the terminal device is acquired.
Step S22, recognizing the image content, and obtaining a preliminary determination result of the scene type.
Step S23, acquiring a focal length value of the camera, where the focal length value of the camera is a focal length value used for shooting image content of the camera preview interface.
Step S24, determining whether the focal length value is greater than a preset focal length value, where the preset focal length value is a focal length value used for shooting the real scene type corresponding to the preliminary determination result.
And step S25, if the focal length value is greater than the preset focal length value, determining the scene type corresponding to the preliminary judgment result as the real scene type.
And step S26, labeling the scene type corresponding to the preliminary judgment result.
Specifically, to help the user know the scene type corresponding to the acquired image content, the scene type is labeled with text and displayed on the camera preview interface, or the labeled scene type of the image content is announced by voice.
Optionally, in order to reduce resource consumption caused by frequently identifying scene types of the image content, after the step S26, the method includes:
acquiring the labeling time at which the scene type corresponding to the preliminary judgment result was labeled, and a first current time; if the difference between the labeling time and the first current time is greater than a first preset difference threshold, judging whether a scene type change instruction has been received; and if no scene type change instruction has been received, adjusting the current camera parameters to the scene parameters corresponding to the scene type of the preliminary judgment result.
Specifically, when the scene type corresponding to the preliminary judgment result is labeled, the labeling time is recorded. The first current time is acquired, the difference between the first current time and the labeling time is calculated, and whether this difference is greater than the first preset difference threshold is judged. If it is greater, whether a scene type change instruction has been received is judged; if no such instruction has been received, the current camera parameters are adjusted according to the scene type corresponding to the preliminary judgment result. For example, assuming the scene type corresponding to the current camera parameters is the landscape scene type and the last labeled scene type is the portrait scene type, since the camera parameters corresponding to the landscape scene type differ from those of the portrait scene type, the current camera parameters need to be adjusted to the camera parameters corresponding to the portrait scene type.
In the above steps, corresponding camera parameters are set in advance for the different scene types. In the case where the camera parameters corresponding to two scene types are the same, "adjusting the current camera parameters to the scene parameters corresponding to the scene type of the preliminary judgment result after no scene type change instruction is received" specifically includes: after no scene type change instruction is received, judging whether the scene type labeled before the last labeling of the scene type corresponding to the preliminary judgment result is a preset scene type; if so, the camera parameters are not adjusted; if not, the current camera parameters are adjusted to the scene parameters corresponding to the last labeled scene type. The preset scene type is determined according to the last labeled scene type. For example, assuming the camera parameters corresponding to the landscape scene type are the same as those corresponding to the beach scene type, when the last labeled scene type is the landscape scene type, it is judged whether the previously labeled scene type is the beach scene type (the preset scene type is determined to be the beach scene type according to the landscape scene type); if so, the camera parameters are not adjusted; if not, the current camera parameters are adjusted to the scene parameters corresponding to the last labeled scene type.
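The timing and parameter-adjustment logic above can be condensed into one sketch, under the assumption that camera parameters are represented as opaque per-scene values; all names here are hypothetical, not the patent's:

```python
def maybe_adjust_parameters(label_time: float, now: float, threshold: float,
                            change_received: bool, current_scene: str,
                            labeled_scene: str, params_by_scene: dict):
    """Return the parameter set to switch to, or None if no adjustment is needed.
    Parameters are adjusted only when the labeling is old enough, no scene type
    change instruction was received, and the parameters actually differ."""
    if now - label_time <= threshold:
        return None                       # labeled too recently
    if change_received:
        return None                       # user changed the scene type manually
    if params_by_scene[current_scene] == params_by_scene[labeled_scene]:
        return None                       # e.g. landscape and beach share parameters
    return params_by_scene[labeled_scene]
```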
Optionally, in order to reduce resource consumption caused by frequently identifying scenes of image content, after determining the scene type corresponding to the preliminary determination result as the real scene type, the method includes:
acquiring a second current time of the camera preview interface and the time at which the last judgment result of a real scene type was obtained, where the last judgment result of a real scene type is the latest judgment result obtained before the second current time. If the difference between the second current time and the time of the last real scene type judgment result is smaller than a second preset difference threshold, judging whether the moving speed of the terminal device is smaller than a preset speed threshold. If it is smaller than the preset speed threshold, the scene type corresponding to the image content displayed on the camera preview interface is judged to be the same scene type as the last real scene type judgment result.
The moving speed of the terminal equipment can be obtained by acquiring data of an acceleration sensor arranged on the terminal equipment.
In practice, when the interval is short (for example, when the second preset difference threshold is 1 second) and the moving speed of the terminal device is small, the image content of the camera preview interface generally does not change much. In this case, the last judged scene type is taken as the scene type of the image content of the current camera preview interface, and optionally the judged scene type is labeled. Of course, if the interval between the second current time and the time at which the last real scene type judgment result was obtained is longer (for example, greater than or equal to the second preset difference threshold), the process returns to step S11.
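The speed-based shortcut can be sketched as follows; function and parameter names are hypothetical, and returning None stands for "return to step S11 and re-identify":

```python
def reuse_last_result(now: float, last_time: float, diff_threshold: float,
                      speed: float, speed_threshold: float, last_scene: str):
    """Reuse the last real-scene judgment when it is recent (e.g. within 1 s)
    and the device is moving slowly, since the preview content barely changes."""
    if now - last_time < diff_threshold and speed < speed_threshold:
        return last_scene
    return None  # stale result or fast-moving device: re-run recognition
```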
Optionally, after the determining the scene type corresponding to the preliminary determination result as the real scene type, the method includes:
acquiring a third current time of the camera preview interface and the time at which the last judgment result of a real scene type was obtained, where the last judgment result of a real scene type is the latest judgment result obtained before the current time at which the camera preview interface was acquired. If the difference between the third current time and the time of the last real scene type judgment result is smaller than the second preset difference threshold, judging whether the moving distance of the terminal device is smaller than a preset distance threshold. If it is smaller than the preset distance threshold, the scene type corresponding to the image content displayed on the camera preview interface is judged to be the same scene type as the last real scene type judgment result.
The moving distance of the terminal device can be determined by calculating the difference between the positions of the terminal device at two adjacent time points. Specifically, a locator provided on the terminal device, or a built-in positioning function such as the Global Positioning System (GPS), determines the position of the terminal device at any point in time.
Optionally, the determined scene type is annotated. Of course, if the time interval between the current time and the time when the last real scene type is obtained is longer (e.g., if the time interval is greater than or equal to the second preset difference threshold), the process returns to step S11.
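The distance-based variant differs from the speed-based one only in how device movement is measured. A sketch using planar positions sampled at two adjacent time points (all names hypothetical; real devices would use GPS coordinates and a geodesic distance):

```python
import math

def moved_distance(p1, p2):
    """Straight-line distance between the device's positions at two time points."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def reuse_by_distance(now, last_time, diff_threshold,
                      p1, p2, dist_threshold, last_scene):
    """Reuse the last real-scene judgment when it is recent and the device
    has barely moved; otherwise return None (return to step S11)."""
    if now - last_time < diff_threshold and moved_distance(p1, p2) < dist_threshold:
        return last_scene
    return None
```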
Optionally, the identifying the image content and obtaining a preliminary determination result of the scene type includes:
and identifying the image content through the trained scene model, and obtaining a primary judgment result of the scene type according to an output result of the trained scene model.
Specifically, different scene types are recognized in advance through deep learning and training of a convolutional neural network, so that scene models corresponding to the different scene types are obtained; recognizing scene types through the convolutional neural network can guarantee a recognition accuracy of more than 98%. During identification, the acquired image content is input into the trained scene model, and the preliminary judgment result of the scene type corresponding to the image content is obtained from the model's output. Optionally, when identifying the scene type of the acquired image content, the image content needs to be checked against the scene models one by one. Since the scene type corresponding to the image content a user shoots over a period of time is usually unchanged, priorities can be predetermined for the different scene types, and the acquired image content is then identified in order of those priorities. For example, if the scene type priorities are portrait, landscape, beach, blue sky, macro, and so on, then after the image content is acquired, whether it is a portrait is first checked with the portrait scene model; if not, whether it is a landscape is checked with the landscape scene model, and so on.
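The priority-ordered identification described above can be sketched generically by treating each trained scene model as a predicate. The stand-in models below are not the patent's convolutional networks, just placeholders with the same interface:

```python
def recognize_by_priority(image, models, priority):
    """Run the per-scene-type models in priority order (e.g. portrait first,
    then landscape, beach, ...) and return the first scene type that matches."""
    for scene_type in priority:
        if models[scene_type](image):
            return scene_type
    return None  # no model matched the image content
```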
Optionally, the identifying the image content and obtaining a preliminary determination result of the scene type includes:
acquiring predetermined scene type characteristics, and identifying the image content according to the predetermined scene type characteristics to obtain a preliminary judgment result of the scene type. Specifically, scene type characteristics corresponding to different scene types are determined in advance. For example, for the portrait scene type, the corresponding scene type characteristics include human face features, human skin color features, and the like; for the landscape scene type, the corresponding scene type characteristics include mountain features, river features, and the like. When the features included in the image content match the corresponding scene type characteristics, the scene type of the image content is judged to be the scene type corresponding to those scene type characteristics. In this step, the scene type of the image content is identified through the scene type characteristics, so the identification speed of the image content can be greatly improved.
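The feature-matching approach can be sketched as follows; the feature sets and the detected-feature representation are hypothetical stand-ins for the predetermined scene type characteristics:

```python
# Sketch of scene-type feature matching. The per-scene feature sets
# mirror the examples in the text (face/skin colour for portrait,
# mountain/river for landscape); the set-of-strings representation of
# detected features is an illustrative assumption.

SCENE_FEATURES = {
    "portrait":  {"face", "skin_color"},
    "landscape": {"mountain", "river"},
}

def match_scene_type(image_features):
    """Return the first scene type whose predetermined characteristics
    intersect the features detected in the image, else None."""
    for scene_type, features in SCENE_FEATURES.items():
        if features & image_features:   # any predetermined feature found
            return scene_type
    return None

print(match_scene_type({"face", "sky"}))    # portrait
print(match_scene_type({"river", "tree"}))  # landscape
```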
Example three:
fig. 3 shows a schematic structural diagram of a scene recognition apparatus provided in an embodiment of the present application, where the scene recognition apparatus is applicable to a terminal device, and for convenience of description, only a part related to the embodiment of the present application is shown:
the scene recognition apparatus includes: an image content acquisition unit 31, an image content recognition unit 32, a focal length value acquisition unit 33, a scene type preliminary judgment unit 34, and a scene type determination unit 35. Wherein:
an image content acquiring unit 31 for acquiring image content of a camera preview interface of the terminal device.
The terminal device includes devices such as a mobile phone, a tablet computer and a digital camera; the image content includes information such as pixel values and luminance values of the image.
And an image content identification unit 32, configured to identify the image content, and obtain a preliminary determination result of the scene type.
The scene types of the embodiment of the present application include portrait, landscape, beach, blue sky, macro, and the like. The definition of the scene type is mainly determined according to the foreground and the background of the image.
A focal length value obtaining unit 33, configured to obtain a focal length value of a camera, where the focal length value of the camera is a focal length value used for shooting image content of the camera preview interface.
And a scene type preliminary judgment unit 34, configured to judge whether the focal length value is greater than a preset focal length value, where the preset focal length value is a focal length value used for shooting a real scene type corresponding to the preliminary judgment result.
Specifically, the corresponding focal length values are set for different predefined scene types, and the focal length values corresponding to different scene types may be the same or different.
And a scene type determining unit 35, configured to determine, if the focal length value is greater than the preset focal length value, the scene type corresponding to the preliminary judgment result as a real scene type.
Optionally, the scene recognition apparatus further includes:
and a pseudo scene type judging unit, configured to judge the scene type of the image content to be a pseudo scene type if the focal length value is less than or equal to the preset focal length value.
In the embodiment of the application, the scene type corresponding to the image content can be automatically identified according to the image content, so the user does not need to manually select the scene type, reducing the user's operation steps. Further, the focal length value adopted for shooting the image content of the camera preview interface is compared with the focal length value adopted for shooting the real scene type corresponding to the image content, and only if the former is larger is the scene type corresponding to the image content of the camera preview interface determined to be the real scene type. Real and pseudo scene types can thus be further distinguished, improving the accuracy of the determination result of the scene type.
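A minimal sketch of the real/pseudo decision, assuming each scene type maps to a single illustrative preset focal length value:

```python
# Sketch of the focal-length check that separates real from pseudo
# scene types. The per-scene preset focal length values (in mm) are
# illustrative assumptions, not values from the patent.

PRESET_FOCAL_LENGTH = {"portrait": 50.0, "landscape": 24.0, "macro": 60.0}

def confirm_scene_type(preliminary_scene, focal_length_value):
    """Return (scene_type, is_real): the preliminary judgment result is
    confirmed as a real scene type only when the focal length value used
    for the preview image exceeds the preset value for that scene."""
    preset = PRESET_FOCAL_LENGTH[preliminary_scene]
    if focal_length_value > preset:
        return preliminary_scene, True   # real scene type
    return preliminary_scene, False      # pseudo scene type

print(confirm_scene_type("portrait", 85.0))  # ('portrait', True)
print(confirm_scene_type("portrait", 35.0))  # ('portrait', False)
```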
Example four:
fig. 4 is a schematic structural diagram of another scene recognition device provided in an embodiment of the present application, where in the embodiment of the present application, the scene recognition device includes: the image content acquiring unit 31, the image content identifying unit 32, the focal length value acquiring unit 33, the scene type preliminary judging unit 34, and the scene type determining unit 35, and the result output unit 36.
The result output unit 36 is used for labeling the scene type corresponding to the preliminary determination result.
Specifically, in order to let the user know the scene type corresponding to the acquired image content, the scene type is labeled with text and displayed on the camera preview interface, or the labeled scene type of the image content is broadcast by voice.
Optionally, in order to reduce resource consumption caused by frequently identifying scene types of the image content, the scene identification apparatus further includes:
a scene parameter adjusting unit, configured to obtain the labeling time at which the scene type corresponding to the preliminary judgment result is labeled, and a first current time; if the difference between the labeling time and the first current time is greater than a first preset difference threshold, judge whether a scene type change instruction is received, and if no scene type change instruction is received, adjust the current camera parameters to the scene parameters corresponding to the scene type corresponding to the preliminary judgment result.
Specifically, when the scene type corresponding to the preliminary judgment result is labeled, the labeling time at which the scene type is labeled is recorded. The first current time is acquired, the difference between the first current time and the labeling time is calculated, and it is judged whether this difference is greater than the first preset difference threshold. If so, it is judged whether a scene type change instruction has been received; if no scene type change instruction has been received, the current camera parameters are adjusted according to the scene type corresponding to the preliminary judgment result.
Of course, for the case that the camera parameters corresponding to two scene types are the same, "adjusting the current camera parameters to the scene parameters corresponding to the scene type corresponding to the preliminary judgment result after no scene type change instruction is received" specifically includes: after no scene type change instruction is received, judging whether the scene type labeled immediately before the scene type corresponding to the preliminary judgment result (that is, the previously labeled scene type) is a preset scene type; if so, keeping the current camera parameters unchanged, and if not, adjusting the current camera parameters to the scene parameters corresponding to the scene type corresponding to the preliminary judgment result. The preset scene type is determined according to the previously labeled scene type.
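The timed parameter adjustment described above can be sketched as follows; the threshold value and the parameter dictionary are illustrative assumptions, and times are plain numbers in seconds:

```python
# Sketch of the delayed camera-parameter adjustment: after a scene type
# is labeled, parameters are adjusted only once the first preset time
# difference has elapsed and no scene-change instruction has arrived.

FIRST_PRESET_DIFF = 2.0  # seconds, illustrative threshold

def maybe_adjust_parameters(label_time, current_time,
                            change_instruction_received, scene_params):
    """Return the scene parameters to apply, or None when it is too
    early or the user has requested a different scene type."""
    if current_time - label_time <= FIRST_PRESET_DIFF:
        return None          # difference not yet above the threshold
    if change_instruction_received:
        return None          # user overrode the labeled scene type
    return scene_params      # adjust to the labeled scene's parameters

print(maybe_adjust_parameters(0.0, 3.0, False, {"iso": 100}))  # {'iso': 100}
print(maybe_adjust_parameters(0.0, 1.0, False, {"iso": 100}))  # None
```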
Optionally, in order to reduce resource consumption caused by frequently identifying scenes of the image content, the scene identification apparatus further includes:
a first scene type re-judging unit, configured to obtain a second current time of the camera preview interface and the time at which the judgment result of the last real scene type was obtained, where the judgment result of the last real scene type is: the latest judgment result obtained before the second current time of the camera preview interface is acquired. If the difference between the second current time of the camera preview interface and the time at which the judgment result of the last real scene type was obtained is less than a second preset difference threshold, it is judged whether the moving speed of the terminal device is less than a preset speed threshold. If the moving speed is less than the preset speed threshold, the scene type corresponding to the image content displayed on the camera preview interface is judged to be the same scene type as the judgment result of the last real scene type.
The moving speed of the terminal device can be obtained by acquiring data from an acceleration sensor provided on the terminal device.
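The repeat-judgment logic can be sketched as follows; both threshold values are illustrative assumptions:

```python
# Sketch of the repeat-judgment logic: when little time has passed since
# the last real scene type was determined and the device is moving
# slowly, the previous result is reused instead of re-running
# recognition, which reduces resource consumption.

SECOND_PRESET_DIFF = 5.0   # seconds, illustrative
PRESET_SPEED = 0.5         # m/s, illustrative

def reuse_last_scene(current_time, last_result_time, moving_speed, last_scene):
    """Return the last real scene type when it is recent and the device
    has barely moved; otherwise return None to trigger re-recognition."""
    if (current_time - last_result_time < SECOND_PRESET_DIFF
            and moving_speed < PRESET_SPEED):
        return last_scene
    return None

print(reuse_last_scene(10.0, 8.0, 0.1, "portrait"))  # portrait
print(reuse_last_scene(10.0, 2.0, 0.1, "portrait"))  # None
```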
Optionally, the scene recognition apparatus further includes:
a second scene type re-judging unit, configured to obtain a third current time of the camera preview interface and the time at which the judgment result of the last real scene type was obtained, where the judgment result of the last real scene type is: the latest judgment result obtained before the third current time of the camera preview interface is acquired. If the difference between the third current time of the camera preview interface and the time at which the judgment result of the last real scene type was obtained is less than a second preset difference threshold, it is judged whether the moving distance of the terminal device is less than a preset distance threshold. If the moving distance is less than the preset distance threshold, the scene type corresponding to the image content displayed on the camera preview interface is judged to be the same scene type as the judgment result of the last real scene type.
The moving distance of the terminal device can be determined by calculating the position difference of the terminal device at two adjacent time points. Specifically, a locator provided on the terminal device, or a built-in positioning function such as built-in GPS, determines the position of the terminal device at any point in time.
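A sketch of deriving the moving distance from the device positions at two adjacent time points, here simplified to planar coordinates in metres (a real implementation would convert GPS latitude/longitude to a distance instead; the distance threshold is illustrative):

```python
# Sketch of the distance-based variant: the moving distance is the
# position difference between two adjacent sample points, compared
# against a preset distance threshold.

import math

def moving_distance(pos_a, pos_b):
    """Euclidean distance between two (x, y) positions in metres."""
    return math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])

PRESET_DISTANCE = 2.0  # metres, illustrative threshold

def should_reuse_scene(pos_a, pos_b):
    """Reuse the last real scene type when the device moved less than
    the preset distance threshold between the two sample points."""
    return moving_distance(pos_a, pos_b) < PRESET_DISTANCE

print(should_reuse_scene((0.0, 0.0), (1.0, 1.0)))  # True  (~1.41 m)
print(should_reuse_scene((0.0, 0.0), (3.0, 4.0)))  # False (5.0 m)
```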
Optionally, the determined scene type is labeled by the result output unit 36. Of course, if the time interval between the third current time and the time at which the judgment result of the last real scene type was obtained is long (for example, greater than or equal to the second preset difference threshold), the process returns to the image content acquiring unit 31.
Optionally, the image content identifying unit 32 according to the embodiment of the present application includes:
and the scene model identification module is used for identifying the image content through the trained scene model and obtaining a primary judgment result of the scene type according to the output result of the trained scene model.
Specifically, different scene types are learned in advance through deep learning and training of a convolutional neural network, and scene models corresponding to the different scene types are obtained.
Optionally, when the scene type of the acquired image content is identified, the acquired image content needs to be identified one by one against the scene models. Since the scene type corresponding to the image content shot by the user within a period of time is usually unchanged, priorities of different scene types can be predetermined, and the acquired image content is then identified according to the priorities of the scene types.
Optionally, the image content identifying unit 32 according to the embodiment of the present application includes:
and the scene type feature matching module is used for acquiring the predetermined scene type features, identifying the image content according to the predetermined scene type features and obtaining the initial judgment result of the scene type.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example five:
fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: a processor 50, a memory 51 and a computer program 52 stored in said memory 51 and executable on said processor 50. The processor 50, when executing the computer program 52, implements the steps in the various scene recognition method embodiments described above, such as the steps S11-S15 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 21 to 25 shown in fig. 2.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 52 in the terminal device 5. For example, the computer program 52 may be divided into an image content acquiring unit, an image content identifying unit, a focal length value acquiring unit, a scene type preliminary judging unit, and a scene type determining unit, and the specific functions of each unit are as follows:
an image content acquiring unit, configured to acquire image content of a camera preview interface of the terminal device;
the image content identification unit is used for identifying the image content to obtain a primary judgment result of the scene type;
a focal length value acquiring unit, configured to acquire a focal length value of a camera, where the focal length value of the camera is the focal length value adopted for shooting the image content of the camera preview interface;
a scene type preliminary judgment unit, configured to judge whether the focal length value is greater than a preset focal length value, where the preset focal length value is a focal length value used for shooting a real scene type corresponding to the preliminary judgment result;
and a scene type determining unit, configured to determine the scene type corresponding to the preliminary judgment result as a real scene type if the focal length value is greater than the preset focal length value.
The terminal device 5 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is merely an example of a terminal device 5 and does not constitute a limitation of terminal device 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing the computer program and other programs and data required by the terminal device. The memory 51 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, can realize the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for scene recognition, comprising:
acquiring image content of a camera preview interface of the terminal equipment;
identifying the image content to obtain a primary judgment result of the scene type;
acquiring a focal length value of a camera, wherein the focal length value of the camera is a focal length value adopted for shooting image content of a camera preview interface;
judging whether the focal length value is greater than a preset focal length value, wherein the preset focal length value is a focal length value adopted for shooting the real scene type corresponding to the preliminary judgment result;
and if the focal length value is greater than the preset focal length value, judging the scene type corresponding to the preliminary judgment result as a real scene type.
2. The method according to claim 1, wherein after the judging, if the focal length value is greater than the preset focal length value, the scene type corresponding to the preliminary judgment result as the real scene type, the method comprises:
and marking the scene type corresponding to the preliminary judgment result.
3. The method according to claim 2, wherein after said labeling the scene type corresponding to the preliminary decision result, the method comprises:
acquiring the labeling time for labeling the scene type corresponding to the preliminary judgment result and first current time;
if the difference between the labeling time and the first current time is greater than a first preset difference threshold, judging whether a scene type change instruction is received, and if no scene type change instruction is received, adjusting current camera parameters to scene parameters corresponding to the scene type corresponding to the preliminary judgment result.
4. The method according to claim 1, wherein after determining the scene type corresponding to the preliminary determination result as the real scene type, the method comprises:
acquiring a second current time of a camera preview interface and a time at which a judgment result of a last real scene type was obtained, wherein the judgment result of the last real scene type is: the latest judgment result obtained before the second current time of the camera preview interface is acquired;
if the difference value between the second current time of the camera preview interface and the time of obtaining the judgment result of the last real scene type is smaller than a second preset difference threshold value, judging whether the moving speed of the terminal equipment is smaller than a preset speed threshold value or not;
and if the moving speed is less than the preset speed threshold, judging the scene type corresponding to the image content displayed on the camera preview interface to be the same scene type as the judgment result of the last real scene type.
5. The method according to claim 1, wherein after determining the scene type corresponding to the preliminary determination result as the real scene type, the method comprises:
acquiring a third current time of a camera preview interface and a time at which a judgment result of a last real scene type was obtained, wherein the judgment result of the last real scene type is: the latest judgment result obtained before the third current time of the camera preview interface is acquired;
if the difference value between the third current time of the camera preview interface and the time of obtaining the judgment result of the last real scene type is smaller than a second preset difference threshold value, judging whether the moving distance of the terminal equipment is smaller than a preset distance threshold value or not;
and if the moving distance is less than the preset distance threshold, judging the scene type corresponding to the image content displayed on the camera preview interface to be the same scene type as the judgment result of the last real scene type.
6. The scene recognition method according to any one of claims 1 to 5, wherein the recognizing the image content and obtaining the preliminary determination result of the scene type comprises:
and identifying the image content through the trained scene model, and obtaining a primary judgment result of the scene type according to an output result of the trained scene model.
7. The scene recognition method according to claim 1, further comprising:
and if the focal length value is less than or equal to a preset focal length value, judging that the scene type of the image content is a pseudo scene type.
8. A scene recognition apparatus, comprising:
an image content acquiring unit, configured to acquire image content of a camera preview interface of a terminal device;
the image content identification unit is used for identifying the image content to obtain a primary judgment result of the scene type;
a focal length value acquiring unit, configured to acquire a focal length value of a camera, where the focal length value of the camera is the focal length value adopted for shooting the image content of the camera preview interface;
a scene type preliminary judgment unit, configured to judge whether the focal length value is greater than a preset focal length value, where the preset focal length value is a focal length value used for shooting a real scene type corresponding to the preliminary judgment result;
and a scene type determining unit, configured to determine the scene type corresponding to the preliminary judgment result as a real scene type if the focal length value is greater than the preset focal length value.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201810605325.0A 2018-06-13 2018-06-13 Scene identification method and device and terminal equipment Active CN108769527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810605325.0A CN108769527B (en) 2018-06-13 2018-06-13 Scene identification method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108769527A CN108769527A (en) 2018-11-06
CN108769527B true CN108769527B (en) 2020-06-02





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant