CN112422829B - Method, device, terminal and storage medium for assisting in shooting image - Google Patents

Method, device, terminal and storage medium for assisting in shooting image

Info

Publication number
CN112422829B
CN112422829B (application CN202011299190.3A; published as CN112422829A, granted as CN112422829B)
Authority
CN
China
Prior art keywords
image
assisting
target object
feedback information
current position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011299190.3A
Other languages
Chinese (zh)
Other versions
CN112422829A (en)
Inventor
王伟 (Wang Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202011299190.3A priority Critical patent/CN112422829B/en
Publication of CN112422829A publication Critical patent/CN112422829A/en
Application granted granted Critical
Publication of CN112422829B publication Critical patent/CN112422829B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The present disclosure relates to the field of computer technology, and in particular to a method, apparatus, terminal, and storage medium for assisting in capturing an image. The method for assisting in capturing an image comprises the following steps: acquiring an image captured by a camera device; determining the current position of a target object in the image; and generating first feedback information based on the relative positional relationship between the current position and a target position. By determining the current position of the target object in the captured frame and generating first feedback information from the relative positional relationship between the target object and the target position, the method guides a blind user to move the mobile terminal to a suitable shooting position to capture the target object, helping the user improve shooting quality.

Description

Method, device, terminal and storage medium for assisting in shooting image
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for assisting in capturing an image.
Background
Sharing photos and short videos has become a very popular way to socialize, but it remains very difficult for the roughly 17 million blind people in China to enjoy.
Some blind users try to share photos or videos of themselves or their guide dogs on social platforms, but the subject is often not fully framed in the shot, so the quality of the captured photo or video is low. A method for assisting in capturing images is therefore urgently needed.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
According to one or more embodiments of the present disclosure, there is provided a method for assisting in capturing an image, applied to a mobile terminal having a camera device, the method including:
acquiring an image captured by the camera device;
determining a current position of a target object from the image; and
generating first feedback information based on the relative positional relationship between the current position and a target position.
According to one or more embodiments of the present disclosure, there is provided an apparatus for assisting in photographing an image, including:
an image acquisition unit configured to acquire an image captured by the image pickup device;
a current position determining unit for determining a current position of the target object from the image; and
an information feedback unit for generating first feedback information based on the relative positional relationship between the current position and the target position.
According to one or more embodiments of the present disclosure, there is provided a mobile terminal including:
at least one memory and at least one processor;
wherein the memory is configured to store program code, and the processor is configured to call the program code stored in the memory to perform the method for assisting in capturing an image provided according to one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer storage medium storing program code which, when run on a computer device, causes the computer device to perform a method of assisting in capturing an image provided according to one or more embodiments of the present disclosure.
According to the method for assisting in capturing an image provided by the present disclosure, the current position of the target object in the captured frame is determined, and first feedback information is generated based on the relative positional relationship between the target object and the target position, thereby guiding a blind user to move the mobile terminal to a suitable shooting position to capture the target object and helping the user improve shooting quality.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of a method for assisting in capturing an image according to an embodiment of the present disclosure;
figs. 2a and 2b show usage scenario diagrams of the capturing method provided according to embodiments of the present disclosure;
fig. 3 is a flowchart of a method of assisting in capturing an image according to another embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an external force feedback device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus for assisting in capturing an image according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a terminal device for implementing an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". The term "responsive to" and related terms mean that one signal or event is affected to some extent, but not necessarily completely or directly, by another signal or event. If an event x occurs "in response to" an event y, x may respond directly or indirectly to y. For example, the occurrence of y may ultimately result in the occurrence of x, but other intermediate events and/or conditions may exist. In other cases, y may not necessarily result in the occurrence of x, and x may occur even if y has not occurred. Furthermore, the term "responsive to" may also mean "at least partially responsive to". The term "determining" broadly encompasses a wide variety of actions, which can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), and ascertaining, and can also include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), as well as resolving, selecting, choosing, and establishing. Relevant definitions for other terms will be given in the description below.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that references to "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
For the purposes of this disclosure, the phrase "a and/or B" means (a), (B), or (a and B).
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, fig. 1 shows a flowchart of a method 100 for assisting in capturing an image, which includes steps S101 to S103:
step S101: an image captured by the camera is acquired.
In this step, the acquired image includes, but is not limited to, a live preview (through) image displayed by the camera device in a photo or video mode, an image currently being captured, and an image that has already been captured.
Step S102: the current position of the target object is determined from the image.
The target object may be determined based on preset system or user settings, a real-time user instruction, or an instruction received from a remote server. For example, after turning on the camera of the mobile terminal, the user may issue a voice instruction to designate the target object.
The current position of the target object is its position in the acquired image. In some embodiments, the position of the target object in the image may be determined using a pre-trained image recognition model, which may optionally be a convolutional neural network (CNN). Convolutional neural networks are modeled on the biological visual perception mechanism and consist of convolutional layers, pooling layers, and fully connected layers. Weight sharing in the convolution kernels of the hidden layers and the sparsity of inter-layer connections allow a CNN to learn grid-structured features (such as pixels) with relatively little computation, perform stably, and require no additional feature engineering on the data, making it well suited to image and speech recognition tasks.
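As a sketch of this step, the following assumes a detector (such as the CNN mentioned above) has already produced a list of labeled bounding boxes; the detection format, field names, and helper function are illustrative assumptions, not part of the patent:

```python
from typing import Optional, Tuple


def current_position(detections: list, target_label: str) -> Optional[Tuple[float, float]]:
    """Return the center (x, y) of the highest-confidence detection
    matching the target label, or None if the object is not in frame.

    `detections` is assumed to be a list of dicts with keys 'label',
    'score', and 'box' (x1, y1, x2, y2), as a typical detector might emit.
    """
    candidates = [d for d in detections if d["label"] == target_label]
    if not candidates:
        return None  # target object not detected in this frame
    best = max(candidates, key=lambda d: d["score"])
    x1, y1, x2, y2 = best["box"]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```

Returning None when the object is absent gives the caller a natural hook for the second feedback information described later.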
Step S103: generating first feedback information based on the relative positional relationship between the current position and the target position.
The target position may be preset or determined according to an operation instruction issued by the user. For example, the center of the viewfinder frame or video frame may be preset as the target position, or the user may issue a real-time voice instruction designating, say, the lower-middle area of the frame as the target position. The relative positional relationship between the current position and the target position may include a relative direction and a separation distance.
In some embodiments, the distance between the current position of the target object and the target position, and the direction of the target position relative to the current position, may be calculated, and the first feedback information may be determined based on the calculated distance and direction.
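The distance-and-direction computation described above can be sketched as follows; the coordinate convention (image x increasing to the right, y increasing downward) and the direction labels are illustrative assumptions:

```python
import math
from typing import Tuple


def relative_offset(current: Tuple[float, float],
                    target: Tuple[float, float]):
    """Distance between the current position of the target object and the
    target position, plus the direction of the target relative to the
    current position, expressed as coarse horizontal/vertical hints."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    distance = math.hypot(dx, dy)
    horizontal = "left" if dx < 0 else "right" if dx > 0 else ""
    vertical = "up" if dy < 0 else "down" if dy > 0 else ""
    return distance, (horizontal, vertical)
```

The coarse hints could then be rendered as speech or mapped onto a haptic actuator, per the feedback modalities discussed below.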
In this embodiment, the first feedback information is used to remind the user to move the mobile terminal to a proper shooting position so that the target object approaches to the set target position in the image. The first feedback information may include voice information, tactile information, or other information perceptible to the blind user, or an electrical signal used to generate such information. The tactile information includes, but is not limited to, vibration, micro-current, mechanical movement, etc. which can be sensed by the user through the sense of touch.
Preferably, the first feedback information is tactile feedback information. Compared with voice feedback, tactile feedback lets a blind user feel how to move the mobile terminal more intuitively, allows a faster reaction, is not disturbed by ambient noise, and is also suitable for blind users with hearing impairments. Exemplarily, fig. 2a illustrates a usage scenario of a method for assisting in capturing an image according to an embodiment of the present disclosure. Referring to fig. 2a, an image 210 captured by a camera (not shown) is currently displayed on the screen of the mobile terminal 200, the target position 220 is preset at the center of the screen or image, and the current position of the target object 230 is recognized as being to the lower right of the target position 220. At this time, first feedback information prompting the user to move the mobile terminal 200 to the lower right or backward may be generated based on the relative positional relationship between the target position and the current position shown in fig. 2a.
In this way, the method for assisting in capturing an image determines the current position of the target object in the captured frame and generates first feedback information based on the relative positional relationship between the target object and the target position, thereby guiding a blind user to move the mobile terminal to a suitable shooting position to capture the target object and helping the user improve shooting quality.
In some embodiments, the method 100 further comprises: generating second feedback information if the target object is not detected in the image. The second feedback information may include voice information, tactile information, or an electrical signal for generating such information, delivered to the user. For example, the second feedback information may be voice information such as "target object not in frame, please move the phone". In this way, the blind user can be alerted by the second feedback information when the object to be captured has not entered the frame.
In some embodiments, the method 100 further comprises: acquiring the size of the target object in the image; and step S103 comprises: generating first feedback information based on the relative positional relationship between the current position and the target position and on the size of the target object in the image.
In some embodiments, the size of the target object in the image comprises a numerical value of an area, height or width of the target object in the image or a ratio relative to an area, height or width of the terminal screen.
Referring to fig. 2b, which shows another usage scenario of the capturing method provided according to an embodiment of the present disclosure: an image 210 captured by a camera (not shown) is currently displayed on the screen of the mobile terminal 200, the target position 220 is preset at the center of the screen or image, the current position of the target object 240 is identified as being to the lower right of the target position 220, and the target object 240 occupies a large portion of the frame with much of it outside the frame. In this case, if the user is only prompted to move the phone to the lower right, part of the target object 240 will still be out of frame even once it is centered in the image 210, so the user also needs to be prompted to move the mobile terminal backward to bring the whole of the target object 240 into frame. Therefore, in this embodiment, the size of the target object in the image is obtained and the first feedback information is determined based on that size: for example, if the target object appears large in the image, the user may be prompted to move the mobile terminal backward; if it appears small, the user may be prompted to move the mobile terminal forward. In this way, the method can guide a blind user both to place the target object at the target position of the frame and to keep it at a suitable proportion of the frame, further helping the user improve shooting quality.
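The size-based prompt described in this paragraph might be sketched as follows; the area-ratio thresholds are illustrative assumptions, not values from the patent:

```python
def depth_hint(object_area: float, frame_area: float,
               lo: float = 0.2, hi: float = 0.6) -> str:
    """Suggest a forward/backward movement of the terminal based on the
    ratio of the target object's area to the frame area.

    The thresholds `lo` and `hi` are illustrative; a real implementation
    would tune them per use case (portrait, guide dog, etc.).
    """
    ratio = object_area / frame_area
    if ratio > hi:
        return "move backward"  # object too large, likely partly out of frame
    if ratio < lo:
        return "move forward"   # object too small in the frame
    return "hold"               # object occupies a suitable proportion
```

The same ratio could equally be computed from the object's height or width relative to the screen, as the embodiment above permits.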
In some embodiments, the first feedback information includes information indicating a moving direction and information indicating a moving distance, so that a blind user can be assisted to move the mobile terminal to a proper photographing position more quickly and accurately.
In some embodiments, step S103 comprises:
step A1: determining terminal movement information based on the current position and a preset target position; the terminal moving information comprises a moving direction and a moving distance of the terminal;
step A2: and determining the first feedback information according to the terminal mobile information.
In this embodiment, the terminal movement information, which reflects the actual distance and direction the terminal needs to move, is calculated from the relative positional relationship between the current position and the target position in the image, so that the first feedback information generated from it guides the blind user accurately.
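Steps A1 and A2 can be sketched as follows; the pixel-to-centimeter scale is an illustrative assumption (in practice it would depend on focal length and subject distance):

```python
import math
from typing import Tuple


def terminal_movement(current: Tuple[float, float],
                      target: Tuple[float, float],
                      pixels_per_cm: float = 20.0) -> dict:
    """Step A1 sketch: derive the direction and an approximate physical
    distance the terminal should move from the pixel offset between the
    current position and the target position.

    Moving the terminal toward the object shifts the object toward the
    frame center, so the movement direction follows the object's offset.
    The `pixels_per_cm` scale is a hypothetical calibration constant.
    """
    dx = current[0] - target[0]  # object right of target -> move terminal right
    dy = current[1] - target[1]  # object below target -> move terminal down
    direction = ("right" if dx > 0 else "left" if dx < 0 else "",
                 "down" if dy > 0 else "up" if dy < 0 else "")
    distance_cm = math.hypot(dx, dy) / pixels_per_cm
    # Step A2: the first feedback information is derived from this record,
    # e.g. rendered as speech or as a haptic force of proportional strength.
    return {"direction": direction, "distance_cm": distance_cm}
```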
Referring to fig. 3, fig. 3 illustrates a method 300 of assisting in capturing an image, provided according to another embodiment of the present disclosure, including steps S301 to S308:
step S301: starting a camera device of the mobile terminal;
step S302: acquiring an image captured by the camera device;
step S303: determining a target object in response to a voice instruction of a user;
step S304: determining a target position in response to a voice instruction of a user;
step S305: determining whether a target object is in the image; if yes, go to step S307, otherwise go to step S306: generating second feedback information;
step S307: determining terminal movement information based on the current position and the target position; the terminal moving information comprises a moving direction and a moving distance of the terminal;
step S308: and determining the first feedback information according to the terminal mobile information.
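Steps S305 to S308 above can be sketched end to end as follows; the detection format and the feedback messages are illustrative assumptions:

```python
from typing import Tuple


def assist_frame(detections: list, target_label: str,
                 target_pos: Tuple[float, float]) -> dict:
    """One iteration of method 300: if the target object is absent,
    return second feedback information (step S306); otherwise compute
    terminal movement information and first feedback (steps S307-S308).

    `detections` is assumed to be a list of dicts with 'label' and
    'box' (x1, y1, x2, y2) keys, as a typical detector might emit.
    """
    obj = next((d for d in detections if d["label"] == target_label), None)
    if obj is None:
        # Step S306: target object not in the image
        return {"type": "second",
                "message": "target not in frame, please move the phone"}
    x1, y1, x2, y2 = obj["box"]
    current = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    # Steps S307-S308: movement info from current vs. target position
    dx = current[0] - target_pos[0]
    dy = current[1] - target_pos[1]
    return {"type": "first", "move": (dx, dy)}
```

In a running application this function would be called on each preview frame after the target object and target position have been set by voice instruction (steps S303 and S304).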
In some embodiments, the mobile terminal is connected to an external force feedback device configured to implement the first feedback information to alert the user.
The external force feedback device may be built into the mobile terminal as one of its components, or connected to it as a separate accessory in a wired or wireless manner. In this embodiment, the external force feedback device can indicate the moving direction and distance through the direction and magnitude of the feedback force, making the tactile feedback more intuitive and accurate for blind users. For example, the external force feedback device may be a magnetic force-feedback joystick comprising a permanent magnet and four coils that generate magnetic forces according to their input currents; the permanent magnet moves according to the magnitude and direction of the coils' magnetic forces and drives the joystick mounted on it. Such a magnetic-feedback joystick is compact and portable, and when mounted on the mobile terminal can be held conveniently by the user.
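The four-coil magnetic force-feedback joystick described above might be driven as in the following sketch; the coil layout (+x/-x/+y/-y) and the linear force-to-current model are assumptions for illustration, not taken from the patent:

```python
import math


def coil_currents(direction_deg: float, strength: float) -> dict:
    """Decompose a desired feedback force (angle in degrees, magnitude
    in the range 0..1) into drive levels for four coils arranged on the
    +x, -x, +y, and -y axes around the permanent magnet.

    Each opposing coil pair handles one sign of its axis, so only the
    coil pulling in the desired direction is energized. The linear
    mapping from force to current is a hypothetical simplification.
    """
    fx = strength * math.cos(math.radians(direction_deg))
    fy = strength * math.sin(math.radians(direction_deg))
    return {
        "+x": max(fx, 0.0), "-x": max(-fx, 0.0),
        "+y": max(fy, 0.0), "-y": max(-fy, 0.0),
    }
```

A stronger current (larger `strength`) would then convey a larger required movement distance, matching the force-magnitude encoding described above.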
Illustratively, referring to fig. 4, in some embodiments the external force feedback device is a joystick 210 disposed on the back of the mobile terminal 200, positioned so that the user can grip it naturally with the index and middle fingers while holding the phone.
Accordingly, as shown in fig. 5, an embodiment of the present disclosure provides an apparatus 400 for assisting in capturing an image, including:
an image acquisition unit 410 for acquiring an image captured by the image pickup device;
a current position determining unit 420 for determining a current position of the target object from the image; and
an information feedback unit 430, configured to generate first feedback information based on a relative position relationship between the current position and the target position.
In this way, the apparatus for assisting in capturing an image provided by the embodiment of the present disclosure determines the current position of the target object in the captured frame and generates first feedback information based on the relative positional relationship between the target object and the target position, thereby guiding a blind user to move the mobile terminal to a suitable shooting position to capture the target object and helping the user improve shooting quality.
For the apparatus embodiments, since they correspond substantially to the method embodiments, the relevant points may be found in the corresponding descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative: modules described as separate modules may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement them without inventive effort.
In some embodiments, the apparatus 400 further comprises:
and the target object determining unit is used for determining the target object in response to a voice instruction of a user.
In some embodiments, the apparatus 400 further comprises:
and the target position determining unit is used for determining the target position in response to a voice instruction of a user.
In some embodiments, the apparatus 400 further comprises:
a second information generation unit configured to generate second feedback information if the target object is not detected in the image. The second feedback information may include voice information, tactile information, or an electrical signal for generating such information, delivered to the user. For example, the second feedback information may be voice information such as "target object not in frame, please move the phone". In this way, the blind user can be alerted by the second feedback information when the object to be captured has not entered the frame.
In some embodiments, the apparatus 400 further comprises: a size acquisition unit configured to acquire a size of the target object in the image; the information feedback unit 430 is further configured to generate first feedback information based on the relative position relationship between the current position and the target position and the size of the target object in the image.
In some embodiments, the information feedback unit 430 is further configured to determine terminal movement information based on the current location and the target location, and determine the first feedback information according to the terminal movement information; the terminal moving information comprises a moving direction and a moving distance of the terminal.
Accordingly, in accordance with one or more embodiments of the present disclosure, there is provided a mobile terminal including:
at least one memory and at least one processor;
the memory is used for storing program codes, and the processor is used for calling the program codes stored in the memory to execute the method for assisting in shooting the image, wherein the method is provided according to one or more embodiments of the disclosure.
Accordingly, according to one or more embodiments of the present disclosure, a non-transitory computer storage medium is provided, storing program code which, when run on a computer device, causes the computer device to perform a method of assisting in capturing an image provided according to one or more embodiments of the present disclosure.
Fig. 6 shows a schematic structural diagram of a terminal device 800 for implementing an embodiment of the disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, devices such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a tablet computer (PAD), a portable multimedia player (PMP), and the like. The terminal device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the terminal device 800 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. Various programs and data necessary for the operation of the terminal device 800 are also stored in the RAM 803. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 807 such as a liquid crystal display (LCD), speakers, and vibrators; storage devices 808 such as magnetic tape and hard disks; and a communication device 809. The communication device 809 may allow the terminal device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 illustrates a terminal device 800 with various components, it should be understood that not all illustrated components are required to be implemented or provided; more or fewer components may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802. When executed by the processing device 801, the computer program performs the above-described functions defined in the method of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be included in the terminal device, or may exist separately without being incorporated into the terminal device.
The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquire an image captured by the camera device; determine a current position of a target object from the image; and generate first feedback information based on a relative positional relationship between the current position and a target position.
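The three steps the program carries out — acquire a frame, locate the target object, and turn the positional error into feedback — can be sketched as follows. This is a minimal illustration rather than the patented implementation; the `detect_target` stub and the normalized-coordinate convention are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Feedback:
    direction: Tuple[float, float]  # unit vector: which way to move the camera
    distance: float                 # how far, in normalized image units

def detect_target(image, target_label: str) -> Optional[Tuple[float, float]]:
    """Hypothetical detector: returns the target object's centre in
    normalized coordinates (0..1), or None when the object is not in
    the frame (the 'second feedback information' case)."""
    raise NotImplementedError  # e.g. an on-device face/object detector

def make_feedback(current: Tuple[float, float],
                  target: Tuple[float, float]) -> Feedback:
    # Vector from the object's current position to where it should appear
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0:
        return Feedback(direction=(0.0, 0.0), distance=0.0)
    return Feedback(direction=(dx / dist, dy / dist), distance=dist)
```

For example, an object detected at the left of the frame while the desired composition centres it yields a rightward direction and a distance proportional to the offset.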
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, apparatuses, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Here, the name of a unit does not constitute a limitation of the unit itself in some cases, and for example, an image acquisition unit may be described as "a unit for acquiring an image captured by the image pickup device".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a method for assisting in capturing an image, applied to a mobile terminal having a camera device, the method including: acquiring an image captured by the camera device; determining a current position of a target object from the image; and generating first feedback information based on a relative positional relationship between the current position and a target position.
According to one or more embodiments of the present disclosure, there is provided a method of assisting in capturing an image, further including: the target object is determined in response to a voice instruction of a user.
According to one or more embodiments of the present disclosure, there is provided a method of assisting in capturing an image, further including: the target location is determined in response to a voice instruction of a user.
According to one or more embodiments of the present disclosure, there is provided a method of assisting in capturing an image, further including: if the target object is not detected from the image, generating second feedback information.
According to one or more embodiments of the present disclosure, there is provided a method of assisting in capturing an image, further including: acquiring the size of the target object in the image; wherein the generating of the first feedback information based on the relative positional relationship between the current position and the target position includes: generating the first feedback information based on the relative positional relationship and the size of the target object in the image.
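A size-aware variant of the feedback step can be sketched as follows. The bounding-box-area convention and the tolerance threshold are assumptions for illustration only: when the object appears smaller than desired the feedback hints the user to move closer, and when larger, to back off, in addition to the in-plane direction hint.

```python
from typing import Tuple

def feedback_with_size(current_center: Tuple[float, float],
                       target_center: Tuple[float, float],
                       current_size: float,
                       target_size: float,
                       tol: float = 0.05):
    """Combine the positional error with the object's apparent size.
    `current_size`/`target_size` are the object's bounding-box area as a
    fraction of the frame (a hypothetical convention). Returns the
    in-plane correction vector plus a depth hint."""
    dx = target_center[0] - current_center[0]
    dy = target_center[1] - current_center[1]
    if current_size < target_size * (1 - tol):
        depth_hint = "move closer"   # object too small in the frame
    elif current_size > target_size * (1 + tol):
        depth_hint = "move back"     # object too large in the frame
    else:
        depth_hint = "hold distance"
    return (dx, dy), depth_hint
```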
According to one or more embodiments of the present disclosure, the first feedback information includes information indicating a moving direction and information indicating a moving distance.
According to one or more embodiments of the present disclosure, the first feedback information includes haptic feedback information.
According to one or more embodiments of the present disclosure, the mobile terminal is connected with an external force feedback device configured to deliver the first feedback information to alert the user.
According to one or more embodiments of the present disclosure, the external force feedback device is disposed on the back of the mobile terminal.
According to one or more embodiments of the present disclosure, the generating of the first feedback information based on the relative positional relationship between the current position and the target position includes: determining terminal movement information based on the current position and the target position; and determining the first feedback information according to the terminal movement information, where the terminal movement information includes a moving direction and a moving distance of the terminal.
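A sketch of this final mapping step — from terminal movement information (a direction and a distance) to a force vector whose direction and magnitude alert the user — might look like the following. The scaling constants and the degree-based angle convention are assumptions, not details from the disclosure.

```python
import math

def movement_to_force(direction_deg: float, distance: float,
                      max_force: float = 1.0,
                      full_scale: float = 0.5) -> tuple:
    """Map terminal movement info (direction angle in degrees, remaining
    distance in normalized image units) to a 2-D force vector for a
    haptic actuator. The force points along the required movement
    direction, and its magnitude grows with the remaining distance,
    saturating at max_force once distance >= full_scale."""
    magnitude = min(distance / full_scale, 1.0) * max_force
    rad = math.radians(direction_deg)
    return (magnitude * math.cos(rad), magnitude * math.sin(rad))
```

In this sketch a large remaining offset produces a strong push in the direction the terminal should move, and the force fades to zero as the target object settles onto the target position.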
According to one or more embodiments of the present disclosure, there is provided an apparatus for assisting in capturing an image, including: an image acquisition unit, configured to acquire an image captured by the camera device; a current position determining unit, configured to determine a current position of a target object from the image; and an information feedback unit, configured to generate first feedback information based on a relative positional relationship between the current position and a target position.
According to one or more embodiments of the present disclosure, there is provided a mobile terminal including: at least one memory and at least one processor; wherein the memory is used for storing program codes, and the processor is used for calling the program codes stored in the memory to execute the method for assisting in shooting the image, provided according to one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer storage medium storing program code which, when run on a computer device, causes the computer device to perform a method of assisting in capturing an image provided according to one or more embodiments of the present disclosure.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure — for example, solutions in which the above features are interchanged with (but not limited to) features with similar functions disclosed herein.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or logical acts of devices, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for assisting in capturing an image, applied to a mobile terminal having a camera device, wherein the mobile terminal is connected with an external force feedback device, the method comprising:
acquiring an image captured by the camera device;
determining a current position of a target object from the image; and
generating first feedback information based on a relative positional relationship between the current position and a target position, wherein the first feedback information comprises information indicating a moving direction and information indicating a moving distance;
wherein the external force feedback device is configured to deliver the first feedback information to alert a user, the external force feedback device being capable of indicating the moving direction and the moving distance through the direction and the magnitude of the force.
2. The method of assisting in capturing an image according to claim 1, further comprising:
the target object is determined in response to a voice instruction of a user.
3. The method of assisting in capturing an image according to claim 1, further comprising:
the target location is determined in response to a voice instruction of a user.
4. The method of assisting in capturing an image according to claim 1, further comprising:
if the target object is not detected from the image, generating second feedback information.
5. The method of assisting in capturing an image according to claim 1, further comprising:
acquiring the size of the target object in the image;
the generating of the first feedback information based on the relative position relationship between the current position and the target position includes: and generating first feedback information based on the relative position relation and the size of the target object in the image.
6. The method of assisting in capturing an image according to claim 1,
the external force feedback device is arranged on the back of the mobile terminal.
7. The method of assisting in capturing an image according to claim 1, wherein the generating of the first feedback information based on the relative positional relationship between the current position and the target position comprises:
determining terminal movement information based on the current position and the target position;
determining the first feedback information according to the terminal movement information;
the terminal moving information comprises a moving direction and a moving distance of the terminal.
8. An apparatus for assisting in capturing an image, the apparatus comprising:
an image acquisition unit, configured to acquire an image captured by the camera device;
a current position determining unit for determining a current position of the target object from the image; and
an information feedback unit, configured to generate first feedback information based on a relative positional relationship between the current position and a target position, wherein the first feedback information comprises information indicating a moving direction and information indicating a moving distance;
wherein the first feedback information is delivered by an external force feedback device, the external force feedback device being capable of indicating the moving direction and the moving distance through the direction and the magnitude of the force.
9. A mobile terminal, comprising:
at least one memory and at least one processor;
wherein the memory is configured to store program code and the processor is configured to call the program code stored in the memory to perform the method of any of claims 1 to 7.
10. A non-transitory computer storage medium, characterized in that,
the non-transitory computer storage medium stores program code that, when run on a computer device, causes the computer device to perform the method of any of claims 1 to 7.
CN202011299190.3A 2020-11-19 2020-11-19 Method, device, terminal and storage medium for assisting in shooting image Active CN112422829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011299190.3A CN112422829B (en) 2020-11-19 2020-11-19 Method, device, terminal and storage medium for assisting in shooting image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011299190.3A CN112422829B (en) 2020-11-19 2020-11-19 Method, device, terminal and storage medium for assisting in shooting image

Publications (2)

Publication Number Publication Date
CN112422829A CN112422829A (en) 2021-02-26
CN112422829B true CN112422829B (en) 2022-04-26

Family

ID=74773758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011299190.3A Active CN112422829B (en) 2020-11-19 2020-11-19 Method, device, terminal and storage medium for assisting in shooting image

Country Status (1)

Country Link
CN (1) CN112422829B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086538B (en) * 2021-03-15 2024-03-15 北京字跳网络技术有限公司 Shooting position determining method, device, equipment and medium
CN113096194B (en) * 2021-05-08 2024-03-26 北京字节跳动网络技术有限公司 Method, device, terminal and non-transitory storage medium for determining time sequence
CN113507564B (en) * 2021-07-10 2022-07-08 广州岸边网络科技有限公司 Camera recognition assistance system for the blind
CN113792580B (en) * 2021-08-02 2023-11-03 日立楼宇技术(广州)有限公司 Auxiliary shooting system, method and device for escalator and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404806A (en) * 2001-09-17 2003-03-26 精工爱普生株式会社 Blindman walking aid
CN202916742U (en) * 2012-11-20 2013-05-01 联想(北京)有限公司 Control device
CN104469121A (en) * 2013-09-16 2015-03-25 联想(北京)有限公司 Information processing method and electronic equipment
CN105100610A (en) * 2015-07-13 2015-11-25 小米科技有限责任公司 Self-photographing prompting method and device, selfie stick and self-photographing prompting system
CN107257440A (en) * 2017-07-31 2017-10-17 深圳回收宝科技有限公司 Method, device and storage medium for detecting video tracking shooting
CN207445554U (en) * 2017-09-27 2018-06-05 深圳市本途科技有限公司 Portable mobile phone joystick
CN109144274A (en) * 2018-09-12 2019-01-04 吉林大学 Force feedback direction guidance system combining centroid movement and vibration, and control method thereof
CN110086992A (en) * 2019-04-29 2019-08-02 努比亚技术有限公司 Shooting control method for mobile terminal, mobile terminal and computer storage medium
CN111026269A (en) * 2019-12-04 2020-04-17 上海褚信医学科技有限公司 Haptic feedback method, device and equipment of biological tissue structure based on force feedback
CN111163261A (en) * 2019-12-25 2020-05-15 上海肇观电子科技有限公司 Target detection method, circuit, visual impairment assistance device, electronic device, and medium
CN111330263A (en) * 2020-02-28 2020-06-26 歌尔科技有限公司 Game paddle and rocker feedback force adjusting device thereof
CN111610826A (en) * 2020-05-27 2020-09-01 杭州广里科技有限公司 Smart computer care board with real-time reminder function

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060024647A1 (en) * 2004-07-30 2006-02-02 France Telecom Method and apparatus for communicating graphical information to a visually impaired person using haptic feedback
US8588464B2 (en) * 2007-01-12 2013-11-19 International Business Machines Corporation Assisting a vision-impaired user with navigation based on a 3D captured image stream
US8151188B2 (en) * 2008-07-23 2012-04-03 General Electric Company Intelligent user interface using on-screen force feedback and method of use
US9335181B2 (en) * 2010-11-10 2016-05-10 Qualcomm Incorporated Haptic based personal navigation
US9584774B2 (en) * 2011-10-24 2017-02-28 Motorola Solutions, Inc. Method and apparatus for remotely controlling an image capture position of a camera
US9715300B2 (en) * 2013-03-04 2017-07-25 Microsoft Technology Licensing, Llc Touch screen interaction using dynamic haptic feedback
CN105748265B (en) * 2016-05-23 2021-01-22 京东方科技集团股份有限公司 Navigation device and method
US10238571B2 (en) * 2016-06-22 2019-03-26 Toyota Motor Engineering & Manufacturing North America, Inc. Vision-assist devices and methods of calibrating image data of a vision-assist device
CN106969772B (en) * 2017-04-10 2020-03-31 南京大学 Guide dog method based on mobile phone platform
CN107770312A (en) * 2017-11-07 2018-03-06 广东欧珀移动通信有限公司 Method for information display, device and terminal
CN108548532A (en) * 2017-12-27 2018-09-18 达闼科技(北京)有限公司 Blind man navigation method, electronic equipment and computer program product based on cloud
US10839636B2 (en) * 2019-01-15 2020-11-17 Igt Programmable haptic force feedback sensations in electronic wagering games
US10630896B1 (en) * 2019-02-14 2020-04-21 International Business Machines Corporation Cognitive dynamic photography guidance and pose recommendation
CN111599459A (en) * 2020-05-15 2020-08-28 京东方科技集团股份有限公司 Control method and control device for remote surgery and surgery system
CN111831115A (en) * 2020-06-28 2020-10-27 深圳市罗伯医疗科技有限公司 Manipulator device control method, upper computer, electronic equipment and storage medium
CN111942285B (en) * 2020-07-10 2022-11-18 夏牧谣 Intelligent vision-impaired person service method and system based on vehicle-mounted glass vibration feedback


Also Published As

Publication number Publication date
CN112422829A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112422829B (en) Method, device, terminal and storage medium for assisting in shooting image
KR102538164B1 (en) Image processing method and device, electronic device and storage medium
US11544820B2 (en) Video repair method and apparatus, and storage medium
CN109829863B (en) Image processing method and device, electronic equipment and storage medium
CN110060215B (en) Image processing method and device, electronic equipment and storage medium
CN111340731B (en) Image processing method and device, electronic equipment and storage medium
EP3160105A1 (en) Method and device for pushing information
CN109145970B (en) Image-based question and answer processing method and device, electronic equipment and storage medium
CN111459364B (en) Icon updating method and device and electronic equipment
CN111290819A (en) Method and device for displaying operation prompt and electronic equipment
CN113139484B (en) Crowd positioning method and device, electronic equipment and storage medium
CN110675355B (en) Image reconstruction method and device, electronic equipment and storage medium
CN112351221B (en) Image special effect processing method, device, electronic equipment and computer readable storage medium
CN112218034A (en) Video processing method, system, terminal and storage medium
CN111586295B (en) Image generation method and device and electronic equipment
CN109635926B (en) Attention feature acquisition method and device for neural network and storage medium
CN110769129B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111290692B (en) Picture display method and device, electronic equipment and computer readable medium
CN113253847A (en) Terminal control method and device, terminal and storage medium
CN114067085A (en) Virtual object display method and device, electronic equipment and storage medium
CN112492230A (en) Video processing method and device, readable medium and electronic equipment
CN111650554A (en) Positioning method and device, electronic equipment and storage medium
CN112734015B (en) Network generation method and device, electronic equipment and storage medium
CN112804457B (en) Photographing parameter determination method and device and electronic equipment
CN111435431A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant