CN108153811B - Input method and device for search content, mobile terminal and storage medium

Input method and device for search content, mobile terminal and storage medium

Info

Publication number
CN108153811B
Authority
CN
China
Prior art keywords
camera
mobile terminal
user
preset
matching degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711192535.3A
Other languages
Chinese (zh)
Other versions
CN108153811A (en)
Inventor
梁金辉 (Liang Jinhui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201711192535.3A
Publication of CN108153811A
Application granted
Publication of CN108153811B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/175 Static expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to the field of computer technology and provides an input method, an input device, a mobile terminal, and a storage medium for search content. The method includes: when a photographing search request input by a mobile terminal user is received, starting a first camera and a second camera on the mobile terminal; acquiring the user expression features of the mobile terminal user through the first camera and calculating the matching degree between these features and preset facial expression features; when the calculated matching degree is within a preset matching degree range, acquiring, through the second camera, an image of the target photographic subject within a preset range in front of the second camera; and acquiring the text corresponding to the target photographic subject image and inputting it into a preset search box. The process of obtaining search content is thereby streamlined, and the degree of automation and the efficiency of photographing search are improved.

Description

Input method and device for search content, mobile terminal and storage medium
Technical Field
The invention belongs to the technical field of computers, and particularly relates to an input method and device for search content, a mobile terminal, and a storage medium.
Background
At present, when a user encounters a question that needs to be searched while reading, the question can either be typed in manually or photographed and then searched. Because photographing is more convenient than typing the question and then searching, users tend to prefer photographing search. In existing photographing search schemes, the user must start the camera, take the photo manually, and then manually submit the captured image as the search input, which makes the procedure cumbersome and the search inefficient. In addition, when the mobile terminal is large, the manual operations required during photographing search are inconvenient for the user.
Disclosure of Invention
The invention aims to provide an input method and device for search content, a mobile terminal, and a storage medium, so as to solve the problem in the prior art that photographing search is inefficient because obtaining the search content involves cumbersome steps and a low degree of automation.
In one aspect, the present invention provides an input method for search content, the method including the following steps:
when a photographing search request input by a mobile terminal user is received, starting a first camera and a second camera on the mobile terminal;
acquiring user expression features of the mobile terminal user through the first camera, and calculating the matching degree between the user expression features and preset facial expression features;
when the calculated matching degree is within a preset matching degree range, acquiring, through the second camera, an image of the target photographic subject within a preset range in front of the second camera; and
acquiring the text corresponding to the target photographic subject image, and inputting the acquired text into a preset search box.
In another aspect, the present invention provides an input apparatus for search content, the apparatus including:
a camera starting unit, configured to start a first camera and a second camera on the mobile terminal when a photographing search request input by a mobile terminal user is received;
a matching degree calculation unit, configured to acquire user expression features of the mobile terminal user through the first camera and calculate the matching degree between the user expression features and preset facial expression features;
an image acquisition unit, configured to acquire, through the second camera, an image of the target photographic subject within a preset range in front of the second camera when the calculated matching degree is within a preset matching degree range; and
a text input unit, configured to acquire the text corresponding to the target photographic subject image and input the acquired text into a preset search box.
In another aspect, the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above input method for search content when executing the computer program.
In another aspect, the present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the input method for search content described above.
When a photographing search request input by a mobile terminal user is received, the first camera and the second camera on the mobile terminal are started; the user expression features of the mobile terminal user are acquired through the first camera and the matching degree between these features and the preset facial expression features is calculated; when the calculated matching degree is within the preset matching degree range, an image of the target photographic subject within a preset range in front of the second camera is acquired through the second camera; and the text corresponding to that image is acquired and input into a preset search box. The process of obtaining the search content is thereby streamlined, and the degree of automation and the efficiency of photographing search are improved.
Drawings
Fig. 1 is a flowchart illustrating an implementation of an input method for search content according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an input device for searching contents according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an input device for searching contents according to a third embodiment of the present invention; and
fig. 4 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following is a detailed description of specific implementations of the present invention, given in conjunction with specific embodiments.
Embodiment one:
Fig. 1 shows the implementation flow of the input method for search content according to the first embodiment of the present invention. For convenience of description, only the parts relevant to this embodiment are shown, as detailed below:
in step S101, when a photo search request input by a user of the mobile terminal is received, a first camera and a second camera on the mobile terminal are started.
The embodiment of the invention is suitable for mobile terminals, in particular to portable mobile terminals such as mobile phones, tablet computers or learning machines and the like which are provided with two or more cameras, so that users can conveniently use the mobile terminals to carry out quick and accurate shooting search. In the embodiment of the invention, the first camera on the mobile terminal is a front camera, the second camera is a rear camera, when the mobile terminal does not perform photographing search, the first camera and the second camera can be in a closed state, and when a photographing search request input by a user of the mobile terminal is received, the first camera and the second camera on the mobile terminal are started, so that the intelligent degree of the mobile terminal is improved, and the energy consumption of the mobile terminal is reduced.
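As an illustration only, the camera startup of step S101 might be orchestrated as in the following Python sketch. The OpenCV device indices, the function name, and the error handling are assumptions for a desktop stand-in; an actual mobile terminal would start its front and rear cameras through the platform's own camera API.

```python
# Minimal sketch of step S101 (assumed desktop stand-in, not the patented implementation):
# open both cameras once a photographing search request arrives.
import cv2

FRONT_CAMERA_INDEX = 0  # hypothetical index of the first (front) camera
REAR_CAMERA_INDEX = 1   # hypothetical index of the second (rear) camera

def start_cameras_on_search_request():
    """Open the front and rear cameras when a photographing search request is received."""
    front = cv2.VideoCapture(FRONT_CAMERA_INDEX)
    rear = cv2.VideoCapture(REAR_CAMERA_INDEX)
    if not (front.isOpened() and rear.isOpened()):
        front.release()
        rear.release()
        raise RuntimeError("could not start both cameras")
    return front, rear
```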
Preferably, after the first camera and the second camera on the mobile terminal are started, a photographing range identifier, a tilt identifier, or both are projected in a preset direction by an identifier projector on the mobile terminal, so that the mobile terminal user can adjust the content to be photographed according to the identifiers. This improves the accuracy of the captured content and, in turn, of the photographing search.
The identifier projector may be a dedicated light source that projects identifiers (for example, a photographing range identifier and a tilt identifier) onto the photographic subject. The photographing range identifier shows the photographing range to the mobile terminal user. It may be a cursor selection frame, such as a rectangular or oval frame, in which case the photographing range is the area enclosed or covered by the frame; it may also be a single cursor, such as a cross cursor, in which case the photographing range is a preset rectangular or oval area containing the cross cursor. The tilt identifier prompts the user as to whether the content to be photographed is tilted, so that the user can adjust it accordingly; the tilt identifier may be one or more line segments.
In step S102, the user expression features of the mobile terminal user are acquired through the first camera, and the matching degree between the user expression features and the preset facial expression features is calculated.
In the embodiment of the invention, a matching degree range that triggers the camera to take a picture is preset for the user expression features relative to the preset facial expression features, and the user expression features may include facial organ features, texture features, and predefined feature points of the face. After the first camera and the second camera on the mobile terminal are started, the first camera acquires the user expression features of the mobile terminal user in real time, the matching degree between these features and the preset facial expression features is calculated, and whether the user has confirmed the shot is judged from the calculated matching degree, thereby enabling automatic photographing.
Preferably, when acquiring the user expression features, the first camera captures a facial image of the mobile terminal user, and face recognition is performed on that image to extract the user expression features, which improves the accuracy of the acquired features. Further preferably, before the first camera acquires the user expression features, the image of the target photographic subject in front of the second camera is output to the user on the screen of the mobile terminal, so that the user can conveniently confirm it.
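The patent does not prescribe how the matching degree is computed; the Python sketch below shows one plausible realization in which the user expression features and the preset facial expression features are treated as numeric vectors and compared by cosine similarity. The function names, the scaling to [0, 1], and the example threshold range are assumptions rather than part of the disclosure.

```python
# Illustrative sketch of step S102: score how closely the user's current expression
# features match a preset facial expression template (assumed cosine-similarity metric).
import numpy as np

def matching_degree(user_features: np.ndarray, preset_features: np.ndarray) -> float:
    """Cosine similarity mapped to [0, 1]; 1.0 means a perfect match."""
    cos = float(np.dot(user_features, preset_features) /
                (np.linalg.norm(user_features) * np.linalg.norm(preset_features) + 1e-12))
    return (cos + 1.0) / 2.0

def is_confirmed(degree: float, preset_range=(0.85, 1.0)) -> bool:
    """True when the calculated matching degree lies within the preset matching degree range."""
    low, high = preset_range
    return low <= degree <= high
```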
In step S103, when the calculated matching degree is within the preset matching degree range, an image of the target photographic subject within a preset range in front of the second camera is acquired through the second camera.
In the embodiment of the invention, if the matching degree between the user expression features and the preset facial expression features is within the preset range, the mobile terminal user is considered to have confirmed the target photographic subject, and a photographing instruction is triggered so that its image is captured by the second camera. If the matching degree is not within the preset range, the user has not yet confirmed the shot; in that case the first camera continues to acquire the user expression features, and acquisition stops only once the matching degree between the acquired features and the preset facial expression features falls within the preset range.
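The confirm-then-capture behaviour of step S103 could be sketched as a simple polling loop, as below. Here extract_expression_features is a hypothetical helper standing in for whatever face-analysis step supplies the feature vector, and matching_degree and is_confirmed refer to the earlier sketch; none of these names come from the patent.

```python
# Sketch of step S103: keep sampling the front camera until the matching degree falls
# within the preset range, then capture one frame of the target subject from the rear camera.

def capture_when_confirmed(front, rear, preset_features, extract_expression_features):
    while True:
        ok, face_frame = front.read()        # front camera watches the user's face
        if not ok:
            continue
        features = extract_expression_features(face_frame)
        if features is None:                 # no face found in this frame
            continue
        if is_confirmed(matching_degree(features, preset_features)):
            ok, subject_image = rear.read()  # expression confirmed: shoot the subject
            if ok:
                return subject_image
```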
In step S104, the text corresponding to the target photographic subject image is acquired and input into a preset search box.
In the embodiment of the present invention, preferably, when acquiring the text corresponding to the target photographic subject image and inputting it into the preset search box, character recognition is first performed on the image to obtain the text it contains, and that text is then entered into the preset search box. This raises the degree of automation of search content acquisition while improving the accuracy of the search content. Further preferably, tilt correction is applied to the target photographic subject image before character recognition, which further improves the accuracy of the subsequent recognition.
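One way to realize step S104 is sketched below: a rough tilt correction (deskew) followed by character recognition and a hand-off to the search box. pytesseract is used only as a stand-in OCR engine (it needs the corresponding Tesseract language data installed), the deskew heuristic assumes the pre-4.5 OpenCV minAreaRect angle convention, and fill_search_box is a hypothetical hook into the terminal's search UI; the patent names none of these components.

```python
# Sketch of step S104: tilt-correct the captured image, recognize its text,
# and pass the text to a search box.
import cv2
import numpy as np
import pytesseract

def deskew(image: np.ndarray) -> np.ndarray:
    """Estimate the text skew from the foreground pixels and rotate it away."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]          # angle convention varies by OpenCV version
    angle = -(90 + angle) if angle < -45 else -angle
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_CUBIC,
                          borderMode=cv2.BORDER_REPLICATE)

def image_to_search_text(subject_image: np.ndarray) -> str:
    corrected = deskew(subject_image)
    return pytesseract.image_to_string(corrected, lang="chi_sim+eng").strip()

def fill_search_box(text: str) -> None:          # hypothetical UI hook
    print(f"search box <- {text}")
```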
In the embodiment of the invention, the first camera acquires the user expression features of the mobile terminal user, and the second camera is automatically triggered to capture the target photographic subject image according to those features. The process of obtaining the search content is thereby streamlined, and the accuracy, degree of automation, and efficiency of photographing search are improved.
Embodiment two:
Fig. 2 shows the structure of the input device for search content according to the second embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown. The device includes:
the camera starting unit 21 is configured to start the first camera and the second camera on the mobile terminal when a photographing search request input by a user of the mobile terminal is received.
In the embodiment of the invention, the first camera on the mobile terminal is a front camera and the second camera is a rear camera. When the mobile terminal is not performing a photographing search, both cameras can remain off; when a photographing search request input by the mobile terminal user is received, the camera starting unit 21 starts the first camera and the second camera on the mobile terminal, which improves the intelligence of the mobile terminal and reduces its energy consumption.
The matching degree calculation unit 22 is configured to acquire the user expression features of the mobile terminal user through the first camera and to calculate the matching degree between the user expression features and the preset facial expression features.
In the embodiment of the invention, a matching degree range that triggers the camera to take a picture is preset for the user expression features relative to the preset facial expression features, and the user expression features may include facial organ features, texture features, and predefined feature points of the face. After the first camera and the second camera on the mobile terminal are started, the matching degree calculation unit 22 uses the first camera to acquire the user expression features of the mobile terminal user in real time, calculates the matching degree between these features and the preset facial expression features, and judges from the calculated matching degree whether the user has confirmed the shot, thereby enabling automatic photographing.
The image acquisition unit 23 is configured to acquire, through the second camera, an image of the target photographic subject within a preset range in front of the second camera when the calculated matching degree is within the preset matching degree range.
In the embodiment of the present invention, if the matching degree between the user expression features and the preset facial expression features is within the preset range, the mobile terminal user is considered to have confirmed the target photographic subject, and the image acquisition unit 23 triggers a photographing instruction so that its image is captured by the second camera. If the matching degree is not within the preset range, the user has not yet confirmed the shot; in that case the first camera continues to acquire the user expression features, and acquisition stops only once the matching degree between the acquired features and the preset facial expression features falls within the preset range.
The text input unit 24 is configured to acquire the text corresponding to the target photographic subject image and input the acquired text into a preset search box.
In the embodiment of the invention, the matching degree calculation unit 22 uses the first camera to acquire the user expression features of the mobile terminal user, and the image acquisition unit 23 automatically triggers the second camera to capture the target photographic subject image according to those features. The process of obtaining the search content is thereby streamlined, and the accuracy, degree of automation, and efficiency of photographing search are improved.
In the embodiment of the present invention, each unit of the input device for search content may be implemented by corresponding hardware or software; the units may be independent software or hardware units, or may be integrated into a single software or hardware unit, which is not limited herein.
Embodiment three:
Fig. 3 shows the structure of the input device for search content according to the third embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown. The device includes:
the camera starting unit 31 is configured to start the first camera and the second camera on the mobile terminal when a photographing search request input by a user of the mobile terminal is received.
In the embodiment of the invention, the first camera on the mobile terminal is a front camera and the second camera is a rear camera. When the mobile terminal is not performing a photographing search, both cameras can remain off; when a photographing search request input by the mobile terminal user is received, the camera starting unit 31 starts the first camera and the second camera on the mobile terminal, which improves the intelligence of the mobile terminal and reduces its energy consumption.
Preferably, after the first camera and the second camera on the mobile terminal are started, a photographing range identifier, a tilt identifier, or both are projected in a preset direction by an identifier projector on the mobile terminal, so that the mobile terminal user can adjust the content to be photographed according to the identifiers. This improves the accuracy of the captured content and, in turn, of the photographing search.
The identifier projector may be a dedicated light source that projects identifiers (for example, a photographing range identifier and a tilt identifier) onto the photographic subject. The photographing range identifier shows the photographing range to the mobile terminal user. It may be a cursor selection frame, such as a rectangular or oval frame, in which case the photographing range is the area enclosed or covered by the frame; it may also be a single cursor, such as a cross cursor, in which case the photographing range is a preset rectangular or oval area containing the cross cursor. The tilt identifier prompts the user as to whether the content to be photographed is tilted, so that the user can adjust it accordingly; the tilt identifier may be one or more line segments.
The matching degree calculation unit 32 is configured to acquire the user expression features of the mobile terminal user through the first camera and to calculate the matching degree between the user expression features and the preset facial expression features.
In the embodiment of the invention, a matching degree range that triggers the camera to take a picture is preset for the user expression features relative to the preset facial expression features, and the user expression features may include facial organ features, texture features, and predefined feature points of the face. After the first camera and the second camera on the mobile terminal are started, the matching degree calculation unit 32 uses the first camera to acquire the user expression features of the mobile terminal user in real time, calculates the matching degree between these features and the preset facial expression features, and judges from the calculated matching degree whether the user has confirmed the shot, thereby enabling automatic photographing.
Preferably, when acquiring the user expression features, the first camera captures a facial image of the mobile terminal user, and face recognition is performed on that image to extract the user expression features, which improves the accuracy of the acquired features. Further preferably, before the first camera acquires the user expression features, the image of the target photographic subject in front of the second camera is output to the user on the screen of the mobile terminal, so that the user can conveniently confirm it.
The image acquisition unit 33 is configured to acquire, through the second camera, an image of the target photographic subject within a preset range in front of the second camera when the calculated matching degree is within the preset matching degree range.
In the embodiment of the present invention, if the matching degree between the user expression features and the preset facial expression features is within the preset range, the mobile terminal user is considered to have confirmed the target photographic subject, and the image acquisition unit 33 triggers a photographing instruction so that its image is captured by the second camera. If the matching degree is not within the preset range, the user has not yet confirmed the shot; in that case the first camera continues to acquire the user expression features, and acquisition stops only once the matching degree between the acquired features and the preset facial expression features falls within the preset range.
The text input unit 34 is configured to acquire the text corresponding to the target photographic subject image and input the acquired text into a preset search box.
In the embodiment of the present invention, preferably, when acquiring the text corresponding to the target photographic subject image and inputting it into the preset search box, character recognition is first performed on the image to obtain the text it contains, and that text is then entered into the preset search box. This raises the degree of automation of search content acquisition while improving the accuracy of the search content. Further preferably, tilt correction is applied to the target photographic subject image before character recognition, which further improves the accuracy of the subsequent recognition.
In the embodiment of the invention, the matching degree calculation unit 32 uses the first camera to acquire the user expression features of the mobile terminal user, and the image acquisition unit 33 automatically triggers the second camera to capture the target photographic subject image according to those features. The process of obtaining the search content is thereby streamlined, and the accuracy, degree of automation, and efficiency of photographing search are improved.
Accordingly, it is preferable that the matching degree calculation unit 32 includes:
a feature acquisition unit 321, configured to acquire a facial image of the mobile terminal user through the first camera and perform face recognition on the facial image to obtain the user expression features of the mobile terminal user.
Preferably, the text input unit 34 includes:
a character recognition unit 341, configured to perform character recognition on the target photographic subject image to obtain the text included in the image; and
a text input subunit 342, configured to input the text included in the target photographic subject image into the preset search box.
In the embodiment of the present invention, each unit of the input device for search content may be implemented by corresponding hardware or software; the units may be independent software or hardware units, or may be integrated into a single software or hardware unit, which is not limited herein.
Embodiment four:
Fig. 4 shows the structure of a mobile terminal according to the fourth embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown.
The mobile terminal 4 of the embodiment of the present invention comprises a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40. When executing the computer program 42, the processor 40 implements the steps in the above embodiments of the input method for search content, such as steps S101 to S104 shown in Fig. 1. Alternatively, when executing the computer program 42, the processor 40 implements the functions of the units in the above device embodiments, for example, the functions of units 21 to 24 shown in Fig. 2 and units 31 to 34 shown in Fig. 3.
In the embodiment of the present invention, when the processor 40 executes the computer program 42 to implement the steps of the above method embodiments, the following is performed: when a photographing search request input by the mobile terminal user is received, the first camera and the second camera on the mobile terminal are started; the user expression features of the mobile terminal user are acquired through the first camera and the matching degree between these features and the preset facial expression features is calculated; when the calculated matching degree is within the preset matching degree range, an image of the target photographic subject within a preset range in front of the second camera is acquired through the second camera; and the text corresponding to that image is acquired and input into a preset search box. The process of obtaining the search content is thereby streamlined, and the degree of automation and the efficiency of photographing search are improved.
The mobile terminal of the embodiment of the invention may be a mobile phone, a tablet computer, or a learning machine. For the steps implemented by the processor 40 of the mobile terminal 4 when executing the computer program 42, reference may be made to the description of the method in the first embodiment, which is not repeated here.
Embodiment five:
in an embodiment of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above-described respective input method embodiments of search content, for example, steps S101 to S104 shown in fig. 1. Alternatively, the computer program, when executed by a processor, implements the functions of the units in the device embodiments described above, for example, the functions of the units 21 to 24 shown in fig. 2 and the units 31 to 34 shown in fig. 3.
In the embodiment of the invention, when a photographing search request input by a mobile terminal user is received, the first camera and the second camera on the mobile terminal are started; the user expression features of the mobile terminal user are acquired through the first camera and the matching degree between these features and the preset facial expression features is calculated; when the calculated matching degree is within the preset matching degree range, an image of the target photographic subject within a preset range in front of the second camera is acquired through the second camera; and the text corresponding to that image is acquired and input into a preset search box. The process of obtaining the search content is thereby streamlined, and the degree of automation and the efficiency of photographing search are improved. For the input method for search content implemented when the computer program is executed by the processor, reference may also be made to the description of the steps in the foregoing method embodiments, which is not repeated here.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disk, or a flash memory.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. An input method for search content, the method comprising the steps of:
when a photographing search request input by a mobile terminal user is received, starting a first camera and a second camera on the mobile terminal;
acquiring user expression features of the mobile terminal user through the first camera, and calculating the matching degree between the user expression features and preset facial expression features;
when the calculated matching degree is within a preset matching degree range, acquiring, through the second camera, an image of the target photographic subject within a preset range in front of the second camera, and performing tilt correction processing on the target photographic subject image; and
acquiring the corresponding text from the tilt-corrected target photographic subject image, and inputting the acquired text into a preset search box,
wherein, after the step of starting the first camera and the second camera on the mobile terminal and before the step of acquiring the user expression features of the mobile terminal user through the first camera, the method further comprises:
projecting a photographing range identifier and a tilt identifier onto the target photographic subject through an identifier projector on the mobile terminal.
2. The method of claim 1, wherein the step of acquiring the user expression features of the mobile terminal user through the first camera comprises:
acquiring a facial image of the mobile terminal user through the first camera, and performing face recognition on the facial image to obtain the user expression features of the mobile terminal user.
3. The method of claim 1, wherein the step of acquiring the text corresponding to the target photographic subject image and inputting the acquired text into a preset search box comprises:
performing character recognition on the target photographic subject image to obtain the text included in the image; and
inputting the text included in the target photographic subject image into the preset search box.
4. The method of any of claims 1 to 3, wherein, before the step of acquiring the user expression features of the mobile terminal user through the first camera, the method further comprises:
outputting the image of the target photographic subject in front of the second camera to the mobile terminal user through a screen of the mobile terminal.
5. An input apparatus for search content, the apparatus comprising:
a camera starting unit, configured to start a first camera and a second camera on the mobile terminal when a photographing search request input by a mobile terminal user is received;
a matching degree calculation unit, configured to acquire user expression features of the mobile terminal user through the first camera and calculate the matching degree between the user expression features and preset facial expression features;
an image acquisition unit, configured to acquire, through the second camera, an image of the target photographic subject within a preset range in front of the second camera when the calculated matching degree is within a preset matching degree range, and to perform tilt correction processing on the target photographic subject image; and
a text input unit, configured to acquire the corresponding text from the tilt-corrected target photographic subject image and input the acquired text into a preset search box,
wherein the apparatus is further configured to project, after the first camera and the second camera on the mobile terminal are started and before the user expression features of the mobile terminal user are acquired through the first camera, a photographing range identifier and a tilt identifier onto the target photographic subject through an identifier projector on the mobile terminal.
6. The apparatus of claim 5, wherein the matching degree calculation unit comprises:
and the feature acquisition unit is used for acquiring the facial image of the mobile terminal user through the first camera and carrying out face recognition on the facial image so as to acquire the user expression features of the mobile terminal user.
7. The apparatus of claim 5, wherein the text input unit comprises:
the character recognition unit is used for carrying out character recognition on the target shooting object image so as to obtain characters included in the target shooting object image; and
and the character input subunit is used for inputting characters included in the target shooting object image into the preset search box.
8. A mobile terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201711192535.3A 2017-11-24 2017-11-24 Input method and device for search content, mobile terminal and storage medium Active CN108153811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711192535.3A CN108153811B (en) 2017-11-24 2017-11-24 Input method and device for search content, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711192535.3A CN108153811B (en) 2017-11-24 2017-11-24 Input method and device for search content, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN108153811A CN108153811A (en) 2018-06-12
CN108153811B (en) 2020-05-26

Family

ID=62468237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711192535.3A Active CN108153811B (en) 2017-11-24 2017-11-24 Input method and device for search content, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN108153811B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102134396B1 (en) * 2018-11-15 2020-07-15 한국과학기술연구원 Portable image acquisition apparatus for plants
CN111385683B (en) * 2020-03-25 2022-01-28 广东小天才科技有限公司 Intelligent sound box application control method and intelligent sound box

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760367A (en) * 2016-01-29 2016-07-13 广东小天才科技有限公司 Real-time word translating method and device
CN107168536A (en) * 2017-05-19 2017-09-15 广东小天才科技有限公司 Examination question searching method, examination question searcher and electric terminal

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104092932A (en) * 2013-12-03 2014-10-08 腾讯科技(深圳)有限公司 Acoustic control shooting method and device
CN105912717A (en) * 2016-04-29 2016-08-31 广东小天才科技有限公司 Image-based information search method and apparatus
CN106777363A (en) * 2017-01-22 2017-05-31 广东小天才科技有限公司 Take pictures searching method and the device of a kind of mobile terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760367A (en) * 2016-01-29 2016-07-13 广东小天才科技有限公司 Real-time word translating method and device
CN107168536A (en) * 2017-05-19 2017-09-15 广东小天才科技有限公司 Examination question searching method, examination question searcher and electric terminal

Also Published As

Publication number Publication date
CN108153811A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
US10165201B2 (en) Image processing method and apparatus and terminal device to obtain a group photo including photographer
EP3179408A2 (en) Picture processing method and apparatus, computer program and recording medium
WO2022042776A1 (en) Photographing method and terminal
US10735697B2 (en) Photographing and corresponding control
US9838616B2 (en) Image processing method and electronic apparatus
WO2022028184A1 (en) Photography control method and apparatus, electronic device, and storage medium
WO2016188185A1 (en) Photo processing method and apparatus
WO2017124899A1 (en) Information processing method, apparatus and electronic device
WO2019119986A1 (en) Image processing method and device, computer readable storage medium, and electronic apparatus
US20160182816A1 (en) Preventing photographs of unintended subjects
CN108307120B (en) Image shooting method and device and electronic terminal
US10594930B2 (en) Image enhancement and repair using sample data from other images
WO2021169686A1 (en) Photo capture control method and apparatus and computer readable storage medium
CN103220465A (en) Method, device and mobile terminal for accurate positioning of facial image when mobile phone is used for photographing
WO2015032099A1 (en) Image processing method and terminal
CN108600610A (en) Shoot householder method and device
CN108153811B (en) Input method and device for search content, mobile terminal and storage medium
JP2015126326A (en) Electronic apparatus and image processing method
CN111833407A (en) Product rendering method and device
CN107343142A (en) The image pickup method and filming apparatus of a kind of photo
CN110868543A (en) Intelligent photographing method and device and computer readable storage medium
CN113012040B (en) Image processing method, image processing device, electronic equipment and storage medium
KR20170011876A (en) Image processing apparatus and method for operating thereof
CN108197141A (en) Topic method, apparatus, mobile terminal and storage medium are searched in taking pictures for mobile terminal
JP2014150348A (en) Imaging apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant