WO2018076182A1 - Image capturing method and apparatus - Google Patents
Image capturing method and apparatus
- Publication number
- WO2018076182A1 (PCT/CN2016/103296)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- photographed
- feature information
- target object
- terminal device
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
Definitions
- the present invention relates to the field of image acquisition, and in particular, to an image capturing method and apparatus.
- camera-equipped smart terminals, such as smart phones and tablets, are becoming more and more popular.
- the camera function of smart terminals is continuously improving, applying various algorithms to make photos more vivid and polished.
- the existing smart terminal image capturing method is mainly as follows: the user activates the photographing function of the smart terminal and selects an area of the preview screen presented on the smart terminal display interface as the face target area; when the smart terminal determines that the error between the position of the center point of the collected face in the preview screen and the position of the center point of the face target area is smaller than a first preset threshold, and the error between the area of the collected face in the preview screen and the area of the face target area is smaller than a second preset threshold, photographing is triggered.
- because whether the face is in the face target area is judged only from the error between the center-point positions and the error between the areas, the recognized face may not be the intended subject, resulting in low recognition accuracy for the target subject.
- An embodiment of the present invention provides an image capturing method to solve the low recognition accuracy that results from the prior-art approach of recognizing the target photographic subject by position and area error values.
- an embodiment of the present invention provides an image capturing method, which can be applied to a terminal device, including:
- photographing is triggered when the terminal device determines that the feature information of the image in the target area includes the feature information of the target object to be photographed. After shooting is triggered, a captured image is obtained; it can be saved in the storage of the terminal device, or uploaded to cloud storage.
- the feature information of the target object to be photographed may be extracted from the image using image recognition technology; the technology may be statistical pattern recognition, structural pattern recognition, or fuzzy pattern recognition, and other recognition algorithms may also be used, which the embodiments of the present invention do not specifically limit.
- the feature information of the target object is extracted, and whether the feature information of the image in the target area preset by the user includes the feature information of the target object is determined, thereby determining whether the target object is within the target area.
- compared with the identification method based on position and area error values, recognizing the target object by its features determines more effectively whether the target object is in the target area, with higher accuracy.
- when the terminal device acquires an image including the target object to be photographed, this may be implemented by collecting the image through the image collector after the image collector provided on the terminal device is activated.
- alternatively, the terminal device can obtain an image including the target object to be photographed from the images it has saved.
- the determining, by the terminal device, that the feature information of the image in the target area includes the feature information of the target object to be photographed includes:
- when the terminal device determines that the coordinate positions of the pixel points of the target object to be photographed are all within the coordinate range of the target area, it determines that the feature information of the image in the target area includes the feature information of the target object to be photographed.
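This coordinate check can be sketched in a few lines. The sketch below is illustrative only (the function and parameter names are assumptions, not the patent's implementation): given the coordinate range of the target area and the pixel coordinates attributed to the target object, the condition holds only when every pixel falls inside the range.

```python
def object_in_target_area(object_pixels, area):
    """Return True if every pixel of the target object lies inside the area.

    object_pixels: iterable of (x, y) pixel coordinates of the target object.
    area: (x_min, y_min, x_max, y_max) coordinate range of the target area.
    """
    x_min, y_min, x_max, y_max = area
    return all(x_min <= x <= x_max and y_min <= y <= y_max
               for x, y in object_pixels)
```

A single pixel outside the range is enough to withhold the trigger, which matches the "all within the coordinate range" condition above.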
- compared with the identification method based on position and area error values, combining target-object feature recognition with position recognition determines more effectively whether the target object is in the target area, with higher accuracy.
- the determining, by the terminal device, that the feature information of the image in the target area includes the feature information of the target object to be photographed includes:
- when the matching degree between the feature information of the image in the target area and the feature information of the target object to be photographed is greater than a preset threshold, the terminal device determines that the feature information of the image in the target area includes the feature information of the target object to be photographed.
- the preset threshold may take a value of 95%.
- whether the feature information of the image in the target area includes the feature information of the target object to be photographed is determined by matching the feature information of the image in the target area against the feature information of the target object to be photographed.
- compared with the identification method based on position and area error values, recognizing the target object by its features determines more effectively whether the target object is in the target area, with higher accuracy.
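The matching-degree variant can be sketched as follows. This is hypothetical: the patent fixes neither the feature representation nor the similarity measure; cosine similarity over numeric feature vectors is used here purely for illustration, with the 95% example threshold as the default.

```python
def contains_target(area_features, target_features, threshold=0.95):
    """Decide whether the target area's features include the target object's.

    Uses cosine similarity between two numeric feature vectors as a
    stand-in matching degree; any other matching algorithm could be used.
    """
    dot = sum(a * b for a, b in zip(area_features, target_features))
    norm_a = sum(a * a for a in area_features) ** 0.5
    norm_b = sum(b * b for b in target_features) ** 0.5
    similarity = dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
    return similarity > threshold
```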
- the terminal device determines a target area in a preview screen presented by the display interface of the terminal device, including:
- the location information is the coordinate range of at least one mutually adjacent region selected by the user among the N regions into which the preview screen is divided, where N is a positive integer greater than or equal to 2.
- the user can select only one area or, in the embodiment of the present invention, two or more adjacent areas as the target area; the range available when selecting the target area is therefore larger and the restrictions fewer.
- an embodiment of the present invention provides an image capturing apparatus, where the apparatus is applied to a terminal device, including:
- An acquiring module configured to acquire an image including a target object to be photographed
- a feature extraction module configured to extract feature information of the target object to be photographed in the image that includes the object to be photographed acquired by the acquiring module
- a regional setting module configured to determine a target area in a preview screen presented on a display interface of the terminal device
- a matching module configured to trigger shooting upon determining that the feature information of the image in the target area set by the area setting module includes the feature information of the target object to be photographed extracted by the feature extraction module.
- the obtaining module is specifically configured to:
- an image including the target object to be photographed is acquired by the image collector.
- the matching module is specifically configured to:
- determine that the feature information of the image in the target area includes the feature information of the target object to be photographed if the matching degree between the feature information of the image in the target area and the feature information of the target object to be photographed is greater than a preset threshold.
- the device further includes:
- a receiving module configured to receive an area setting instruction sent by the user for setting the target area.
- the area setting module is configured to set, according to the location information carried in the area setting instruction received by the receiving module, a corresponding area in the preview screen as the target area, where the location information is the coordinate range of at least one mutually adjacent region selected by the user among the N regions into which the preview screen is divided, and N is a positive integer greater than or equal to 2.
- the feature information of the target object is extracted, and whether the feature information of the image in the target area preset by the user includes the feature information of the target object is determined, thereby determining whether the target object is within the target area.
- compared with the identification method based on position and area error values, recognizing the target object by its features determines more effectively whether the target object is in the target area, with higher accuracy.
- the embodiment of the present invention further provides an apparatus for image capturing, including:
- a processor, a memory, a display, and an image collector.
- the display is for displaying an image.
- the memory is used to store the program code that the processor needs to execute.
- the image collector is used to capture images.
- the processor is configured to execute the program code stored in the memory, specifically to perform the method described in the first aspect or any possible design of the first aspect.
- the embodiment of the present invention further provides a computer readable storage medium configured to store computer software instructions for performing the functions of the first aspect or any design of the first aspect, including a program designed to execute the method of the first aspect or any design of the first aspect.
- FIG. 1 is a schematic diagram of an information identification process according to an embodiment of the present invention
- FIG. 2 is a flowchart of an image capturing method according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of a method for determining a target area according to an embodiment of the present invention
- FIG. 4 is a schematic diagram of another method for determining a target area according to an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a preferred implementation manner of a terminal according to an embodiment of the present invention.
- An embodiment of the present invention provides an image capturing method to solve the low recognition accuracy that results from the prior-art approach of recognizing the target photographic subject by position and area error values.
- the method and the device are based on the same inventive concept; since their principles for solving the problem are similar, the implementations of the device and the method can refer to each other, and repeated description is omitted.
- Scenes to which the embodiments of the present invention can be applied include, but are not limited to, taking self-portraits, framing a shot, shooting moving objects, and the like.
- the solution provided by the embodiment of the present invention can be applied to a terminal device.
- the terminal device has a display device capable of displaying image information, such as a display, a projector, etc., and has an input device such as a keyboard, a cursor control device such as a mouse, a touchpad, and the like.
- the terminal device also has an image collector for acquiring an image or capturing an image.
- the terminal device can be a desktop computer, a mobile computer, a tablet computer, a smart camera, or the like.
- the preview screen is a screen presented on the terminal device display interface after the terminal device starts the image collector set by the terminal device, and the image collector may be a camera of the terminal device.
- composition is the determination and arrangement of the location of the target object to produce a more aesthetically pleasing image. The composition position is the position of the target object within the image that makes the image more attractive.
- Image recognition is a process of recognizing a person or an object based on feature information of a person or an object.
- when recognizing a person or an object, image recognition technology relies on the main features of that person or object in the image; for example, the letter A has a pointed apex, P has a loop, and the center of Y has an acute angle.
- during recognition, redundant input information must be excluded and the key feature information extracted.
- Information acquisition: information can be obtained through sensors.
- the acquired information may be a two-dimensional image, such as text or a picture; a one-dimensional waveform, such as a sound wave, electrocardiogram, or electroencephalogram; or a physical quantity or logical value.
- Preprocessing: preprocessing of the obtained information can be implemented by at least one of the following methods: analog-to-digital (A/D) conversion, binarization, image smoothing, image enhancement, image filtering, and the like.
- Feature extraction and selection: feature selection is performed on the preprocessed information. For example, a 64×64 image yields 4096 data values. Feature extraction and selection transform the acquired raw data to obtain the features in feature space that best reflect the essence of the classification.
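The 64×64 example can be made concrete. In this sketch (Python chosen only for illustration), flattening the image yields the 4096 raw values that feature selection would then reduce:

```python
# A 64x64 grayscale "image" as nested lists (all zeros for brevity).
image = [[0] * 64 for _ in range(64)]

# Flattening produces the 4096 raw data values mentioned above; feature
# extraction/selection would transform these into a far smaller set that
# best reflects the essence of the classification.
raw_values = [pixel for row in image for pixel in row]
```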
- the classification process can be implemented by classifier design or classification decisions.
- the main function of classifier design is to determine, through training, decision rules such that the error rate is lowest when classifying according to them.
- the classification decision is to classify the identified objects in the feature space.
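The acquisition / preprocessing / feature extraction / classification stages above can be strung together in a toy sketch. Everything here is illustrative (the patent prescribes no particular algorithms): binarization stands in for preprocessing, ink density for feature extraction, and a nearest-prototype rule for the classification decision.

```python
def binarize(image, threshold=128):
    """Preprocessing: map each grayscale pixel to 0 or 1."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]

def extract_features(binary_image):
    """Feature extraction: reduce the image to a single ink-density value."""
    flat = [pixel for row in binary_image for pixel in row]
    return [sum(flat) / len(flat)]

def classify(features, prototypes):
    """Classification decision: choose the prototype nearest in feature space."""
    return min(prototypes, key=lambda label: abs(prototypes[label][0] - features[0]))

prototypes = {"dense": [0.8], "sparse": [0.2]}
dark = [[200, 210], [220, 0]]    # binarizes to [[1, 1], [1, 0]]
```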
- FIG. 2 is a flowchart of an image capturing method according to an embodiment of the present invention. The method is performed by an intelligent terminal, and specifically includes the following:
- S101: the terminal device acquires an image that includes a target object to be photographed, and extracts the feature information of the target object to be photographed from that image.
- S102: the terminal device determines a target area in the preview screen presented by the display interface of the terminal device.
- S103: the terminal device triggers shooting when determining that the feature information of the image in the target area includes the feature information of the target object to be photographed.
- when the feature information of the target object to be photographed is extracted in step S101, it may be extracted by image recognition technology; the image recognition technology may be statistical pattern recognition, structural pattern recognition, or fuzzy pattern recognition, and other recognition algorithms may also be used, which are not specifically limited herein.
- steps S101 and S102 are not strictly sequential: step S101 may be performed first and then step S102, or step S102 first and then step S101.
- the terminal device starts an image collector set by the terminal device.
- the feature information of the target object is extracted, and whether the feature information of the image in the target area preset by the user includes the feature information of the target object is determined, thereby determining whether the target object is within the target area.
- compared with the identification method based on position and area error values, recognizing the target object by its features determines more effectively whether the target object is in the target area, with higher accuracy.
- when the terminal device acquires an image that includes the target object to be photographed, it may use either of the following methods:
- the terminal device collects an image including the object to be photographed by the image collector.
- the terminal device may also acquire an image including the object to be photographed from a database in the terminal device that contains images of the object to be photographed.
- the database may be a local database or a cloud database, and the embodiment of the present invention is not specifically limited herein.
- when the terminal device determines that the feature information of the image in the target area includes the feature information of the target object to be photographed, either of the following methods may be used:
- the terminal device acquires the coordinate positions, in the preview picture, of the pixel points of the target object to be photographed.
- when the terminal device determines that the coordinate positions of the pixel points of the target object to be photographed are all within the coordinate range of the target area, it determines that the feature information of the image in the target area includes the feature information of the target object to be photographed.
- when the terminal device acquires the coordinate positions of the pixel points of the target object to be photographed in the preview picture, this may be implemented as follows: the terminal device takes as pixel points of the target object those pixel points in the preview picture whose feature information has a similarity to the feature information of the target object to be photographed greater than a preset matching threshold, and acquires their coordinate positions.
- the preset matching threshold is greater than 0 and less than or equal to 1, for example, 90% to 100%.
- the preset matching threshold may be 95%.
- a matching algorithm based on the sum of squared pixel differences may be used, a least-squares image matching algorithm may be used, and other matching algorithms may also be used.
- the embodiment of the present invention is not specifically limited herein.
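A one-dimensional sketch of matching by the sum of squared pixel differences is shown below. It is illustrative only; a real implementation would slide a 2-D template over the image (in the style of OpenCV template matching), and the function names here are hypothetical.

```python
def ssd(patch_a, patch_b):
    """Sum of squared pixel differences; lower means a better match."""
    return sum((a - b) ** 2 for a, b in zip(patch_a, patch_b))

def best_match_offset(template, strip):
    """Slide a 1-D template along a 1-D pixel strip; return the offset
    whose window minimizes the SSD score."""
    n = len(template)
    return min(range(len(strip) - n + 1),
               key=lambda i: ssd(template, strip[i:i + n]))
```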
- the terminal device acquires the feature information of the image in the target area of the preview picture and matches it against the feature information of the target object to be photographed; if the matching degree is greater than a preset matching threshold, the terminal device determines that the feature information of the image in the target area includes the feature information of the target object to be photographed.
- the preset matching threshold is greater than 0 and less than or equal to 1, for example, 90% to 100%.
- the preset matching threshold may be 95%.
- when matching, the terminal device may adopt a matching algorithm based on the sum of squared pixel differences, or a least-squares image matching algorithm; other matching algorithms may also be used, and the embodiments of the present invention are not specifically limited herein.
- when the terminal device determines the target area in the preview screen presented by its display interface, this may be implemented as follows: the terminal device sets, according to location information carried in an area setting instruction received from the user, a corresponding area in the preview screen as the target area; the location information is the coordinate range of at least one mutually adjacent region selected by the user among the N regions into which the preview screen is divided, where N is a positive integer greater than or equal to 2.
- taking a preview picture divided into nine areas as an example, as shown in FIG. 3:
- the preview screen presented by the display interface of the terminal device is divided into 9 regions labeled 1-9 in sequence; the user can select one of the regions as needed, or select at least two adjacent regions.
- for example, the three areas numbered 7, 8, and 9 can be selected together as the target area.
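Under the 3×3 layout of FIG. 3 (labels 1-9 running left to right, top to bottom, which is a layout assumption), the coordinate range covering a group of selected adjacent regions could be derived like this (a sketch with hypothetical names):

```python
def region_bounds(label, width, height, cols=3, rows=3):
    """(x_min, y_min, x_max, y_max) of one numbered grid cell."""
    idx = label - 1
    cell_w, cell_h = width // cols, height // rows
    col, row = idx % cols, idx // cols
    return (col * cell_w, row * cell_h, (col + 1) * cell_w, (row + 1) * cell_h)

def target_area(labels, width, height):
    """Bounding coordinate range covering all selected adjacent regions."""
    boxes = [region_bounds(label, width, height) for label in labels]
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

Selecting regions 7, 8, and 9 on a 300×300 preview, for instance, yields the bottom third of the screen.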
- the location information may also be the coordinate range of a closed area that the user traces by sliding on the preview screen presented on the smart terminal display interface.
- FIG. 4 shows another schematic diagram of a method for determining a target area according to an embodiment of the present invention.
- the user slides a finger on the preview screen displayed on the display interface of the smart terminal to trace a closed area A, and the closed area A is set as the target area.
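Once the slide gesture has been sampled into a list of vertices, whether a pixel lies inside the traced closed area A could be tested with standard ray casting. This is a sketch; the patent does not specify how membership in the closed region is evaluated.

```python
def point_in_closed_area(point, polygon):
    """Ray-casting point-in-polygon test.

    polygon: list of (x, y) vertices sampled from the user's slide gesture.
    Returns True when the point lies inside the closed area.
    """
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```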
- the method may also be implemented as follows:
- the feature information of the target object is extracted, and whether the feature information of the image in the target area preset by the user includes the feature information of the target object is determined, thereby determining whether the target object is within the target area.
- compared with the identification method based on position and area error values, recognizing the target object by its features determines more effectively whether the target object is in the target area, with higher accuracy.
- an embodiment of the present invention provides an image capturing apparatus 10, where the image capturing apparatus 10 is applied to a terminal device; a schematic structural diagram of the apparatus is shown in FIG. 5.
- the obtaining module 11 is configured to acquire an image including a target object to be photographed.
- a feature extraction module 12 configured to extract feature information of the object to be photographed in the image that includes the object to be photographed acquired by the acquiring module;
- the area setting module 13 is configured to determine a target area in a preview screen presented by the display interface of the terminal device;
- the matching module 14 is configured to trigger shooting upon determining that the feature information of the image in the target area set by the area setting module includes the feature information of the target object to be photographed extracted by the feature extraction module.
- the acquiring module is specifically configured to:
- an image including the target object to be photographed is acquired by the image collector.
- the matching module is specifically configured to:
- determine that the feature information of the image in the target area includes the feature information of the target object to be photographed if the matching degree between the feature information of the image in the target area and the feature information of the target object to be photographed is greater than a preset threshold.
- the device may further include:
- the receiving module 15 is configured to receive an area setting instruction sent by the user for setting the target area.
- the area setting module is configured to set, according to the location information carried in the area setting instruction received by the receiving module, a corresponding area in the preview screen as the target area, where the location information is the coordinate range of at least one mutually adjacent region selected by the user among the N regions into which the preview screen is divided, and N is a positive integer greater than or equal to 2.
- each functional module in the embodiments of the present application may be integrated into one processing unit, may exist alone physically, or two or more modules may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- when the integrated module is implemented in the form of hardware, as shown in FIG. 6, the apparatus can include an image collector 601, a processor 602, and a memory 603; an input module 604, a display 605, and a gravity acceleration sensor 606 may also be included.
- the entity hardware corresponding to the obtaining module 11, the feature extraction module 12, the area setting module 13, the matching module 14, and the receiving module 15 may be the processor 602.
- the processor 602 can acquire an image or capture an image through the image collector 601.
- the processor 602 is configured to execute the program code stored in the memory 603, specifically for performing the method described in the embodiments corresponding to FIG. 2 and FIG. 3 above.
- the processor 602 can be a central processing unit (CPU), a digital processing unit, or the like.
- the memory 603 is configured to store a program executed by the processor 602.
- the memory 603 may be a volatile memory, such as random-access memory (RAM); it may also be a non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or it may be any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
- the memory 603 may be a combination of the above memories.
- the image collector 601 can be a camera for acquiring images, so that the processor 602 acquires the image collected by the image collector 601.
- the input module 604 can be configured to receive input digital or character information, and generate key signal inputs related to user settings and function control of the image capture device.
- input module 604 can include touch screen 6041 and other input devices 6042.
- the touch screen 6041 can collect touch operations by the user on or near it (such as operations performed on or near the touch screen 6041 with a finger, knuckle, stylus, or any other suitable object) and drive the corresponding connection device according to a preset program.
- the touch screen 6041 can detect a user's touch action on it, convert the touch action into a touch signal, send the signal to the processor 602, and receive and execute commands sent by the processor 602; the touch signal at least includes contact coordinate information.
- the touch screen 6041 can provide an input interface and an output interface between the device 10 and a user.
- touch screens can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
- other input devices 6042 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), a mouse, and the like.
- the gravity acceleration sensor 606 can detect the magnitude of acceleration in each direction (generally three axes), and can also detect the magnitude and direction of gravity when the terminal is stationary; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, magnetometer attitude calibration) and for vibration-recognition functions (such as pedometer, tapping). In the embodiment of the invention, the gravity acceleration sensor 606 is used to acquire the gravitational acceleration in the z-axis direction of the user's touch action on the touch screen 6041.
- the device 10 may also include a flash lamp or the like, which will not be described herein.
- the specific connection medium between the image collector 601, the processor 602, the memory 603, the input module 604, the display 605, and the gravity acceleration sensor 606 is not limited in the embodiment of the present application.
- in FIG. 6, the memory 603, the processor 602, the image collector 601, the input module 604, the display 605, and the gravity acceleration sensor 606 are connected by a bus 607, which is indicated by a thick line.
- this manner of connection between the components is merely illustrative and not limiting.
- the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 6, but it does not mean that there is only one bus or one type of bus.
- the processor 602 is the control center of the device 10: it connects the various parts of the entire device using various interfaces and lines, and performs the functions of the device 10 and processes data by running or executing the instructions stored in the memory 603 and invoking the data stored in the memory 603, thereby monitoring the device as a whole.
- the startup command for starting the image collector 601 can be sent to the processor 602 through the input module 604, so that the processor 602 starts the image collector 601.
- the processor 602 acquires an image including the target object to be photographed through the image collector 601, and extracts the feature information of the target object to be photographed from that image.
- the user can touch the selected area on the touch screen 6041 with a finger, the area being one of the N areas into which the preview picture presented by the display covered by the touch screen 6041 is divided, or at least two adjacent areas.
- the gravity acceleration sensor 606 senses the user's touch operation and converts the signal generated by the touch operation into an area setting instruction that sets the target area.
- the processor 602 receives the area setting instruction and sets the area in the preview screen corresponding to the location information carried in the instruction as the target area. Afterwards, the processor 602 acquires the coordinate positions of the pixel points of the target object to be photographed in the preview screen displayed on the display 605 and compares them with the coordinate range of the target area. If the coordinate positions of the pixel points of the target object to be photographed are all within the coordinate range of the target area, it is determined that the feature information of the image in the target area includes the feature information of the target object to be photographed, and photographing is triggered to obtain a captured image.
- the captured image can be saved in the terminal device or uploaded to the cloud server.
- the feature information of the target object is extracted, and whether the feature information of the image in the target area preset by the user includes the feature information of the target object is determined, thereby determining whether the target object is within the target area.
- the identification method based on the position error value and the area error value is adopted, and the target object feature recognition method is more effective in determining whether the target object is in the target area, and the accuracy is high.
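The trigger logic summarized above (set a target region, check that every pixel of the detected target lies inside it, then shoot) can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; representing the detected object by a bounding box and the helper names are assumptions made for the example.

```python
# Illustrative sketch of the capture-trigger check described above.
# The detected target is represented by its pixel bounding box; a real
# implementation would use the per-pixel coordinates from a detector.

def inside(region, box):
    """True if the object's bounding box lies entirely inside the region.
    Both are (x_min, y_min, x_max, y_max) in preview-picture coordinates."""
    rx0, ry0, rx1, ry1 = region
    bx0, by0, bx1, by1 = box
    return rx0 <= bx0 and ry0 <= by0 and bx1 <= rx1 and by1 <= ry1

def should_trigger(region, detected_box):
    # Photographing is triggered only when every pixel of the target
    # object falls within the coordinate range of the target area.
    return detected_box is not None and inside(region, detected_box)

target_region = (100, 100, 400, 400)   # set from the user's touch selection
print(should_trigger(target_region, (150, 160, 300, 320)))  # True: fully inside
print(should_trigger(target_region, (350, 160, 450, 320)))  # False: extends past region
```

In this sketch the camera pipeline would evaluate `should_trigger` on every preview frame and fire the shutter on the first `True`.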
- embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
- the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An image capturing method and apparatus, used to solve the prior-art problem that identifying a target photographing object with a recognition method based on position error values and area error values leads to low recognition accuracy. The image capturing method includes: a terminal device obtains an image including a target object to be photographed and extracts feature information of the target object from the image; the terminal device determines a target area within a preview picture presented on a display interface of the terminal device; and the terminal device triggers photographing when determining that the feature information of the image in the target area includes the feature information of the target object to be photographed.
Description
The present invention relates to the field of image acquisition, and in particular to an image capturing method and apparatus.

Smart terminals with camera functions, such as smartphones and tablets, are becoming increasingly popular. Their photographing capabilities keep improving, applying various algorithms to make photos more vivid and polished.

The existing image capturing method on smart terminals is mainly as follows: the user starts the photographing function of the smart terminal and selects an area in the preview picture presented on the display interface of the smart terminal as a face target area; the smart terminal triggers photographing when it determines that the error value between the position of the center point of the captured face in the preview picture and the position of the center point of the face target area is smaller than a first preset threshold, and the error value between the area of the captured face in the preview picture and the area of the face target area is smaller than a second preset threshold.

In the prior art, whether a face is within the face target area is judged from the error value between the position of the center point of the captured face in the preview picture and that of the center point of the face target area, together with the error value between the area of the captured face in the preview picture and that of the face target area. The recognized face may not be the intended target photographing object, so recognition accuracy for the target photographing object is low.

Summary of the Invention

Embodiments of the present invention provide an image capturing method to solve the prior-art problem that identifying a target photographing object with a recognition method based on position error values and area error values leads to low recognition accuracy.

According to a first aspect, an embodiment of the present invention provides an image capturing method, applicable to a terminal device, including:

The terminal device obtains an image including a target object to be photographed, extracts feature information of the target object from that image, and determines a target area within a preview picture presented on a display interface of the terminal device. After extracting the feature information of the target object and determining the target area, the terminal device determines whether the feature information of the image in the target area includes the feature information of the target object, and triggers photographing when it does. A captured image is then obtained, which the terminal device may save in its own storage or upload to cloud storage.

Specifically, when extracting the feature information of the target object from the image including the target object, an image recognition technique may be used, such as statistical pattern recognition, structural pattern recognition, or fuzzy pattern recognition; other recognition algorithms may also be used, and the embodiments of the present invention impose no specific limitation here.

Embodiments of the present invention are based on target-object feature recognition: by extracting the feature information of the target object and determining whether the feature information of the image in the user-preset target area includes that feature information, it is determined whether the target object is within the target area. Compared with the prior-art recognition method based on position error values and area error values, target-object feature recognition is more effective and more accurate in determining whether the target object is within the target area.
In a possible design, the terminal device may obtain the image including the target object to be photographed as follows: after an image collector provided on the terminal device is started, the terminal device captures, through the image collector, an image including the target object to be photographed.

Alternatively, after the image collector of the terminal device is started, the terminal device may obtain the image including the target object from the images saved on the terminal device.

In a possible design, the terminal device determining that the feature information of the image in the target area includes the feature information of the target object to be photographed includes:

the terminal device obtains the coordinate positions, in the preview picture, of the pixels of the target object to be photographed; and,

if the terminal device determines that the coordinate positions of the pixels of the target object are all within the coordinate range of the target area, it determines that the feature information of the image in the target area includes the feature information of the target object to be photographed.

By comparing the coordinate positions of the pixels of the target object in the preview picture with the coordinate range of the target area, it is determined whether the feature information of the image in the target area includes the feature information of the target object. Compared with the prior-art recognition method based on position error values and area error values, combining target-object feature recognition with target-object position recognition is more effective and more accurate in determining whether the target object is within the target area.

In a possible design, the terminal device determining that the feature information of the image in the target area includes the feature information of the target object to be photographed includes:

the terminal device obtains the feature information of the image in the target area within the preview picture and matches it against the feature information of the target object to be photographed; if the similarity between the two is greater than a preset threshold, the terminal device determines that the feature information of the image in the target area includes the feature information of the target object.

Preferably, the preset threshold may be 95%.

By matching the feature information of the image in the target area against the feature information of the target object, it is determined whether the former includes the latter. Compared with the prior-art recognition method based on position error values and area error values, target-object feature recognition is more effective and more accurate in determining whether the target object is within the target area.
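The feature-matching decision described above can be sketched as follows. This is a minimal Python illustration that assumes the feature information is a numeric vector and uses cosine similarity as the (unspecified) similarity measure; the 95% threshold is the preferred value given in the text.

```python
# Sketch of the similarity-threshold decision: the target is deemed present
# in the target area when the similarity between the region's features and
# the target object's features exceeds a preset threshold.
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors (assumed measure).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def region_contains_target(region_features, target_features, threshold=0.95):
    return cosine_similarity(region_features, target_features) > threshold

print(region_contains_target([1.0, 0.9, 0.2], [1.0, 0.9, 0.2]))  # True: identical features
print(region_contains_target([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # False: unrelated features
```

In practice the feature vectors would come from whichever recognition technique (statistical, structural, or fuzzy pattern recognition) the terminal uses.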
In a possible design, the terminal device determining a target area within the preview picture presented on the display interface of the terminal device includes:

the terminal device receives an area setting instruction sent by the user for setting the target area, and sets the area in the preview picture corresponding to the position information carried in the instruction as the target area, where the position information is the coordinate range of one or more adjacent areas selected by the user from the N areas into which the preview picture is divided, N being a positive integer greater than or equal to 2.

Compared with the prior art, in which the user can select only one area, embodiments of the present invention also allow two or more areas to be selected as the target area, giving the user a larger selection range with fewer limitations.

According to a second aspect, an embodiment of the present invention provides an image capturing apparatus, applied to a terminal device, including:

an obtaining module, configured to obtain an image including a target object to be photographed;

a feature extraction module, configured to extract, from the image obtained by the obtaining module, feature information of the target object to be photographed;

an area setting module, configured to determine a target area within a preview picture presented on a display interface of the terminal device; and

a matching module, configured to determine that the feature information of the image in the target area set by the area setting module includes the feature information of the target object extracted by the feature extraction module, and to trigger photographing.

In a possible design, the obtaining module is specifically configured to: after an image collector provided on the terminal device is started, capture, through the image collector, an image including the target object to be photographed.

In a possible design, the matching module is specifically configured to: obtain the coordinate positions, in the preview picture, of the pixels of the target object to be photographed; and, if determining that the coordinate positions of the pixels of the target object are all within the coordinate range of the target area set by the area setting module, determine that the feature information of the image in the target area includes the feature information of the target object extracted by the feature extraction module.

In a possible design, the matching module is specifically configured to: obtain the feature information of the image in the target area set by the area setting module within the preview picture, and match it against the feature information of the target object extracted by the feature extraction module; and, if the similarity between the feature information of the image in the target area and the feature information of the target object is greater than a preset threshold, determine that the feature information of the image in the target area includes the feature information of the target object.

In a possible design, the apparatus further includes: a receiving module, configured to receive an area setting instruction sent by the user for setting the target area. The area setting module is then specifically configured to set, as the target area, the area in the preview picture corresponding to the position information carried in the area setting instruction received by the receiving module, where the position information is the coordinate range of one or more adjacent areas selected by the user from the N areas into which the preview picture is divided, N being a positive integer greater than or equal to 2.

Embodiments of the present invention are based on target-object feature recognition: by extracting the feature information of the target object and determining whether the feature information of the image in the user-preset target area includes that feature information, it is determined whether the target object is within the target area. Compared with the prior-art recognition method based on position error values and area error values, target-object feature recognition is more effective and more accurate in determining whether the target object is within the target area.

According to a third aspect, an embodiment of the present invention further provides an image capturing apparatus, including: a processor, a memory, a display, and an image collector. The display is configured to display images; the memory is configured to store the program code to be executed by the processor; the image collector is configured to capture images; and the processor is configured to execute the program code stored in the memory, specifically to perform the method of the first aspect or any design of the first aspect.

According to a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium for storing computer software instructions used to perform the functions of the first aspect or any design of the first aspect, including a program designed for performing the method of the first aspect or any design of the first aspect.
FIG. 1 is a schematic diagram of an information recognition process according to an embodiment of the present invention;

FIG. 2 is a flowchart of an image capturing method according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of a method for determining a target area according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of another method for determining a target area according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a preferred implementation of a terminal according to an embodiment of the present invention.
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. Clearly, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Embodiments of the present invention provide an image capturing method to solve the prior-art problem that identifying a target photographing object with a recognition method based on position error values and area error values leads to low recognition accuracy. The method and the apparatus are based on the same inventive concept; since they solve the problem on similar principles, their implementations may refer to each other, and repeated descriptions are omitted.

Application scenarios of the embodiments of the present invention include, but are not limited to: taking selfies, framing a shot, and photographing moving objects.

The solutions provided by the embodiments of the present invention are applicable to a terminal device. The terminal device has a display apparatus capable of displaying image information, such as a display or a projector, and an input apparatus, such as a keyboard or a cursor control device (for example, a mouse or a touchpad). The terminal device also has an image collector for capturing or photographing images. The terminal device may be a desktop computer, a mobile computer, a tablet, a smart camera, or the like.

To make the embodiments of the present application easier to understand, some descriptions involved in the embodiments are explained first; these explanations should not be regarded as limiting the protection scope claimed by the present invention.

The preview picture is the picture presented on the display interface of the terminal device after the terminal device starts the image collector provided on it; the image collector may be a camera of the terminal device.

Composition is the process of determining and arranging the position of the target object to be photographed so as to produce a more aesthetically pleasing image, and a composition position is a position of the target object within the image that makes the image more aesthetically pleasing.

Image recognition is the process of recognizing a person or object based on its feature information. Image recognition techniques rely on the principal features of the person or object in the image; for example, the letter A has a point, P has a loop, and the center of Y has an acute angle. During image recognition, redundant input information must be excluded and the key feature information extracted.

As shown in FIG. 1, the image recognition process is as follows:

First, information acquisition. Information may be obtained through sensors. The acquired information may be two-dimensional images such as text and pictures; one-dimensional waveforms such as sound waves, electrocardiograms, and electroencephalograms; or physical quantities and logical values.

Second, preprocessing. The acquired information may be preprocessed by at least one of the following: analog-to-digital (A/D) conversion, binarization, image smoothing, image enhancement, and image filtering.

Third, feature selection. Features are selected from the preprocessed information. For example, a 64x64 image yields 4096 values; feature extraction and selection transform the acquired raw data into the features in the feature space that best reflect the essence of the classification.

Finally, classification. The classification process may be implemented through classifier design or classification decision. The main function of classifier design is to determine, through training, decision rules that minimize the error rate when classification follows them; classification decision then classifies the object to be recognized in the feature space.
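The four stages above (acquisition, preprocessing, feature selection, classification) can be sketched end to end. The stages below are toy stand-ins chosen for illustration only, not the algorithms used by the invention: a tiny hard-coded "image", fixed-threshold binarization, a single bright-pixel-fraction feature, and a one-rule classifier.

```python
# Minimal sketch of the four-stage recognition pipeline described above.

def acquire():
    # Stage 1: information acquisition (here, a tiny 4x4 grayscale "image").
    return [
        [10, 20, 200, 210],
        [15, 25, 205, 215],
        [12, 22, 198, 202],
        [11, 21, 199, 201],
    ]

def preprocess(image, threshold=128):
    # Stage 2: preprocessing -- binarization against a fixed threshold.
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def select_features(binary):
    # Stage 3: feature selection -- reduce the 16 raw values to one
    # discriminative feature: the fraction of bright pixels.
    flat = [px for row in binary for px in row]
    return sum(flat) / len(flat)

def classify(feature):
    # Stage 4: classification -- a trained decision rule would go here;
    # this toy rule just splits on the bright-pixel fraction.
    return "bright object" if feature >= 0.5 else "dark background"

label = classify(select_features(preprocess(acquire())))
print(label)  # half of the pixels exceed the threshold -> "bright object"
```

A real system would replace each stage (sensor input, A/D conversion and filtering, learned feature spaces, a trained classifier) while keeping this same pipeline shape.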
Preferred implementations of the present invention are described in detail below with reference to the accompanying drawings.

Referring to FIG. 2, a flowchart of an image capturing method according to an embodiment of the present invention, the method is performed by a smart terminal and specifically includes the following steps.

S101: A terminal device obtains an image including a target object to be photographed, and extracts feature information of the target object to be photographed from the image.

S102: The terminal device determines a target area within a preview picture presented on a display interface of the terminal device.

S103: The terminal device triggers photographing when determining that the feature information of the image in the target area includes the feature information of the target object to be photographed.

Specifically, when extracting the feature information of the target object in step S101, an image recognition technique such as statistical pattern recognition, structural pattern recognition, or fuzzy pattern recognition may be used; other recognition algorithms may also be used, and the embodiments of the present invention impose no specific limitation here.

It should be noted that steps S101 and S102 have no strict order: step S101 may be performed before step S102, or step S102 before step S101.

Specifically, before performing steps S101 and S102, the terminal device starts the image collector provided on the terminal device.

Embodiments of the present invention are based on target-object feature recognition: by extracting the feature information of the target object and determining whether the feature information of the image in the user-preset target area includes that feature information, it is determined whether the target object is within the target area. Compared with the prior-art recognition method based on position error values and area error values, target-object feature recognition is more effective and more accurate in determining whether the target object is within the target area.

Optionally, the terminal device may obtain the image including the target object to be photographed in either of the following ways.

First implementation: after the image collector provided on the terminal device is started, the terminal device captures, through the image collector, an image including the target object to be photographed.

Second implementation: the terminal device may also obtain the image including the target object from a database, stored in the terminal device, of images including the target object. It should be noted that the database may be a local database or a cloud database; the embodiments of the present invention impose no specific limitation here.

Optionally, the terminal device may determine that the feature information of the image in the target area includes the feature information of the target object in either of the following ways.

First implementation:

A1: The terminal device obtains the coordinate positions, in the preview picture, of the pixels of the target object to be photographed.

A2: If the terminal device determines that the coordinate positions of the pixels of the target object are all within the coordinate range of the target area, it determines that the feature information of the image in the target area includes the feature information of the target object to be photographed.

The terminal device may obtain the coordinate positions of the pixels of the target object in the preview picture as follows: the terminal device obtains an image matching block from the image of the preview picture and determines the position of the image matching block in the preview picture as the coordinate position of the target object, where the similarity between the feature information of the image matching block and the feature information of the target object is greater than a preset matching threshold. The preset matching threshold is greater than 0 and less than or equal to 1, for example 90% to 100%; preferably, it may be 95%.

It should be noted that, when obtaining the image matching block from the preview picture, the terminal device may use a matching algorithm based on the sum of squared pixel differences, a least-squares image matching algorithm, or any other matching algorithm; the embodiments of the present invention impose no specific limitation here.
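Matching by the sum of squared pixel differences (SSD), one of the algorithms mentioned above, can be sketched as follows: the candidate position whose block differs least from the target's block is taken as the target's coordinate position. This is a pure-Python grayscale illustration for clarity; a real terminal would use an optimized implementation.

```python
# Sketch of SSD template matching: slide the template over the frame and
# pick the position with the minimum sum of squared pixel differences.

def ssd(block_a, block_b):
    # Sum of squared differences between two equally sized pixel blocks.
    return sum((a - b) ** 2
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(frame, template):
    """Return the (row, col) of the top-left corner with minimum SSD."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            block = [row[c:c + tw] for row in frame[r:r + th]]
            score = ssd(block, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

frame = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8],
            [7, 9]]
print(best_match(frame, template))  # (1, 1): exact match, SSD = 0
```

The similarity threshold from the text would be applied to a normalized version of this score before accepting the match.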
Second implementation:

B1: The terminal device obtains the feature information of the image in the target area within the preview picture and matches it against the feature information of the target object to be photographed.

B2: If the similarity between the feature information of the image in the target area and the feature information of the target object is greater than a preset threshold, the terminal device determines that the feature information of the image in the target area includes the feature information of the target object.

The preset threshold is greater than 0 and less than or equal to 1, for example 90% to 100%; preferably, it may be 95%.

It should be noted that, when comparing the similarity between the feature information of the image in the target area and the feature information of the target object, the terminal device may use a matching algorithm based on the sum of squared pixel differences, a least-squares image matching algorithm, or any other matching algorithm; the embodiments of the present invention impose no specific limitation here.

In another possible implementation, the terminal device may determine the target area within the preview picture presented on its display interface as follows: the terminal device receives an area setting instruction sent by the user for setting the target area, and sets the area in the preview picture corresponding to the position information carried in the instruction as the target area, where the position information is the coordinate range of one or more adjacent areas selected by the user from the N areas into which the preview picture is divided, N being a positive integer greater than or equal to 2.

For example, take the preview picture divided into 9 areas, as shown in FIG. 3. Specifically, the preview picture presented on the display interface of the terminal device is divided into 9 areas numbered 1 to 9, and the user may select one area, or at least two adjacent areas, as needed. For instance, when photographing a single person, the two areas numbered 5 and 8 may be selected as the target area; when photographing several people, the four areas numbered 5, 6, 8, and 9 may be selected; and when photographing a moving aircraft, the three areas numbered 7, 8, and 9 may be selected.
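The 3x3 grid selection above can be sketched as follows: the preview picture is divided into 9 numbered areas, and the coordinate range of the user's selection (one area, or several adjacent areas) becomes the target area. The 1080x1440 preview size is an arbitrary example value.

```python
# Sketch of the 9-area grid and of deriving the target area's coordinate
# range from the user's selection of adjacent cells.

def grid_cells(width, height, rows=3, cols=3):
    """Return {number: (x0, y0, x1, y1)}, numbering cells 1..9 row by row."""
    cells, n = {}, 1
    cw, ch = width // cols, height // rows
    for r in range(rows):
        for c in range(cols):
            cells[n] = (c * cw, r * ch, (c + 1) * cw, (r + 1) * ch)
            n += 1
    return cells

def target_area(width, height, selected):
    # The target area's coordinate range is the bounding range of the
    # selected adjacent cells (e.g. 5 and 8 for a single-person shot).
    cells = grid_cells(width, height)
    boxes = [cells[n] for n in selected]
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

print(target_area(1080, 1440, [5, 8]))  # (360, 480, 720, 1440): centre column, middle + bottom rows
```

The same helper covers the multi-person example (cells 5, 6, 8, 9) and the moving-aircraft example (cells 7, 8, 9) from the text.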
Optionally, the position information may also be the coordinate range of a closed area traced by the user's finger on the preview picture presented on the display interface of the smart terminal. For example, FIG. 4 is a schematic diagram of another method for determining a target area according to an embodiment of the present invention: the user traces a closed area A with a finger on the preview picture presented on the display interface of the smart terminal and sets the closed area A as the target area.

Optionally, the target area within the preview picture presented on the display interface of the terminal device may also be determined as follows: the terminal device receives a selection instruction from the user for selecting the target area, and sets the area corresponding to the area position information carried in the selection instruction as the target area, where the area position information is the position information of M preferred composition position areas pre-stored in the smart terminal.

Embodiments of the present invention are based on target-object feature recognition: by extracting the feature information of the target object and determining whether the feature information of the image in the user-preset target area includes that feature information, it is determined whether the target object is within the target area. Compared with the prior-art recognition method based on position error values and area error values, target-object feature recognition is more effective and more accurate in determining whether the target object is within the target area.
Based on the same inventive concept as the method embodiment corresponding to FIG. 2, an embodiment of the present invention provides an image capturing apparatus 10, applied to a terminal device. As shown in the schematic structural diagram of FIG. 5, the apparatus includes an obtaining module 11, a feature extraction module 12, an area setting module 13, and a matching module 14, where:

the obtaining module 11 is configured to obtain an image including a target object to be photographed;

the feature extraction module 12 is configured to extract, from the image obtained by the obtaining module, feature information of the target object to be photographed;

the area setting module 13 is configured to determine a target area within a preview picture presented on a display interface of the terminal device; and

the matching module 14 is configured to determine that the feature information of the image in the target area set by the area setting module includes the feature information of the target object extracted by the feature extraction module, and to trigger photographing.

Optionally, the obtaining module is specifically configured to: after an image collector provided on the terminal device is started, capture, through the image collector, an image including the target object to be photographed.

Optionally, the matching module is specifically configured to: obtain the coordinate positions, in the preview picture, of the pixels of the target object to be photographed; and, if determining that the coordinate positions of the pixels of the target object are all within the coordinate range of the target area set by the area setting module, determine that the feature information of the image in the target area includes the feature information of the target object extracted by the feature extraction module.

Optionally, the matching module is specifically configured to: obtain the feature information of the image in the target area set by the area setting module within the preview picture, and match it against the feature information of the target object extracted by the feature extraction module; and, if the similarity between the feature information of the image in the target area and the feature information of the target object is greater than a preset threshold, determine that the feature information of the image in the target area includes the feature information of the target object.

Optionally, the apparatus may further include a receiving module 15, configured to receive an area setting instruction sent by the user for setting the target area. The area setting module is then specifically configured to set, as the target area, the area in the preview picture corresponding to the position information carried in the area setting instruction received by the receiving module, where the position information is the coordinate range of one or more adjacent areas selected by the user from the N areas into which the preview picture is divided, N being a positive integer greater than or equal to 2.

The division of modules in the embodiments of the present application is illustrative and is merely a division by logical function; other divisions are possible in actual implementation. The functional modules in the embodiments of the present application may be integrated into one processor, may exist separately and physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
When the integrated modules are implemented in the form of hardware, as shown in FIG. 6, the apparatus may include an image collector 601, a processor 602, and a memory 603, and may further include an input module 604, a display 605, and a gravity acceleration sensor 606. The physical hardware corresponding to the obtaining module 11, the feature extraction module 12, the area setting module 13, the matching module 14, and the receiving module 15 may be the processor 602. The processor 602 may capture or photograph images through the image collector 601.

The processor 602 is configured to execute the program code stored in the memory 603, specifically to perform the methods described in the embodiments corresponding to FIG. 2 and FIG. 3. The processor 602 may be a central processing unit (CPU), a digital processing unit, or the like.

The memory 603 is configured to store the program executed by the processor 602. The memory 603 may be a volatile memory such as a random-access memory (RAM), or a non-volatile memory such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 603 may also be any other medium that can carry or store expected program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 603 may also be a combination of the above memories.

The image collector 601 may be a camera configured to capture images, so that the processor 602 obtains the images captured by the image collector 601.

The input module 604 may be configured to receive input digital or character information and to generate key signal inputs related to user settings and function control of the image capturing apparatus. Specifically, the input module 604 may include a touch screen 6041 and other input devices 6042. The touch screen 6041 may collect the user's touch operations on or near it (for example, operations performed on or near the touch screen 6041 with a finger, a knuckle, a stylus, or any other suitable object) and drive the corresponding connection apparatus according to a preset program. The touch screen 6041 may detect the user's touch action on it, convert the touch action into a touch signal, send the signal to the processor 602, and receive and execute commands sent by the processor 602; the touch signal includes at least touch-point coordinate information. The touch screen 6041 may provide an input interface and an output interface between the apparatus 10 and the user. In addition, the touch screen may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The other input devices 6042 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), and a mouse.

The gravity acceleration sensor 606 can detect the magnitude of acceleration in various directions (usually three axes) and can also detect the magnitude and direction of gravity when the terminal is stationary; it may be used in applications for recognizing the posture of a mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping). In this embodiment of the present invention, the gravity acceleration sensor 606 is configured to obtain the gravitational acceleration, in the z-axis direction, of the user's touch action on the touch screen 6041.

Although not shown, the apparatus 10 may further include a flash and the like, which are not described here.

The embodiments of the present application do not limit the specific connection medium among the image collector 601, the processor 602, the memory 603, the input module 604, the display 605, and the gravity acceleration sensor 606.

In FIG. 6, the memory 603, the processor 602, the image collector 601, the input module 604, the display 605, and the gravity acceleration sensor 606 are connected through a bus 607, represented by a bold line in FIG. 6; the connection manner between other components is merely illustrative and not limiting. The bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bold line is used in FIG. 6, but this does not mean that there is only one bus or only one type of bus.

The processor 602 is the control center of the apparatus 10. It connects the various parts of the entire apparatus through various interfaces and lines, and performs the various functions of the apparatus 10 and processes data by running or executing the instructions stored in the memory 603 and invoking the data stored in the memory 603, thereby monitoring the apparatus 10 as a whole.

Specifically, the user may send a start instruction for the image collector 601 to the processor 602 through the input module 604, so that the processor 602 starts the image collector 601. The processor 602 obtains, through the image collector 601, an image including the target object to be photographed, and extracts the feature information of the target object from the image.

Then, the user may touch the touch screen 6041 with a finger to select an area, the area being one of the N areas into which the preview picture presented by the display under the touch screen 6041 is divided, or at least two adjacent areas. The gravity acceleration sensor 606 senses the user's touch operation and converts the signal generated by the touch operation into an area setting instruction for setting the target area. The processor 602 receives the area setting instruction and sets the area in the preview picture corresponding to the position information carried in the instruction as the target area. The processor 602 then obtains the coordinate positions, in the preview picture displayed on the display 605, of the pixels of the target object to be photographed and compares them with the coordinate range of the target area; if the coordinate positions of all the pixels of the target object fall within the coordinate range of the target area, it determines that the feature information of the image in the target area includes the feature information of the target object, triggers photographing, and obtains a captured image, which may be saved in the terminal device or uploaded to a cloud server. For details, refer to the implementations described in the embodiments corresponding to FIG. 2 and FIG. 3, which are not repeated here.
The preferred embodiments described here are merely intended to illustrate and explain the present invention, not to limit it; where no conflict arises, the embodiments of the present application and the functional modules in the embodiments may be combined with one another.

Embodiments of the present invention are based on target-object feature recognition: by extracting the feature information of the target object and determining whether the feature information of the image in the user-preset target area includes that feature information, it is determined whether the target object is within the target area. Compared with the prior-art recognition method based on position error values and area error values, target-object feature recognition is more effective and more accurate in determining whether the target object is within the target area.

Persons skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) that contain computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are performed on the computer or the other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Although the preferred embodiments of the present invention have been described, persons skilled in the art may make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.

Obviously, persons skilled in the art may make various changes and variations to the embodiments of the present invention without departing from their spirit and scope. The present invention is intended to cover such changes and variations, provided that they fall within the scope of the claims of the present invention and their equivalent technologies.
Claims (10)
- An image capturing method, comprising: a terminal device obtaining an image including a target object to be photographed and extracting feature information of the target object from the image; the terminal device determining a target area within a preview picture presented on a display interface of the terminal device; and the terminal device triggering photographing when determining that the feature information of the image in the target area includes the feature information of the target object to be photographed.
- The method according to claim 1, wherein the terminal device obtaining an image including a target object to be photographed comprises: after an image collector provided on the terminal device is started, the terminal device capturing, through the image collector, an image including the target object to be photographed.
- The method according to claim 1 or 2, wherein the terminal device determining that the feature information of the image in the target area includes the feature information of the target object comprises: the terminal device obtaining the coordinate positions, in the preview picture, of the pixels of the target object to be photographed; and, if determining that the coordinate positions of the pixels of the target object are all within the coordinate range of the target area, determining that the feature information of the image in the target area includes the feature information of the target object.
- The method according to claim 1 or 2, wherein the terminal device determining that the feature information of the image in the target area includes the feature information of the target object comprises: the terminal device obtaining the feature information of the image in the target area within the preview picture and matching it against the feature information of the target object to be photographed; and, if the similarity between the feature information of the image in the target area and the feature information of the target object is greater than a preset threshold, determining that the feature information of the image in the target area includes the feature information of the target object.
- The method according to any one of claims 1 to 4, wherein the terminal device determining a target area within the preview picture presented on the display interface of the terminal device comprises: the terminal device receiving an area setting instruction sent by the user for setting the target area, and setting the area in the preview picture corresponding to the position information carried in the instruction as the target area, wherein the position information is the coordinate range of one or more adjacent areas selected by the user from N areas into which the preview picture is divided, N being a positive integer greater than or equal to 2.
- An image capturing apparatus, applied to a terminal device, comprising: an obtaining module, configured to obtain an image including a target object to be photographed; a feature extraction module, configured to extract, from the image obtained by the obtaining module, feature information of the target object to be photographed; an area setting module, configured to determine a target area within a preview picture presented on a display interface of the terminal device; and a matching module, configured to determine that the feature information of the image in the target area set by the area setting module includes the feature information of the target object extracted by the feature extraction module, and to trigger photographing.
- The apparatus according to claim 6, wherein the obtaining module is specifically configured to: after an image collector provided on the terminal device is started, capture, through the image collector, an image including the target object to be photographed.
- The apparatus according to claim 6 or 7, wherein the matching module is specifically configured to: obtain the coordinate positions, in the preview picture, of the pixels of the target object to be photographed; and, if determining that the coordinate positions of the pixels of the target object are all within the coordinate range of the target area set by the area setting module, determine that the feature information of the image in the target area includes the feature information of the target object extracted by the feature extraction module.
- The apparatus according to claim 6 or 7, wherein the matching module is specifically configured to: obtain the feature information of the image in the target area set by the area setting module within the preview picture, and match it against the feature information of the target object extracted by the feature extraction module; and, if the similarity between the feature information of the image in the target area and the feature information of the target object is greater than a preset threshold, determine that the feature information of the image in the target area includes the feature information of the target object.
- The apparatus according to any one of claims 6 to 9, further comprising: a receiving module, configured to receive an area setting instruction sent by the user for setting the target area; wherein the area setting module is specifically configured to set, as the target area, the area in the preview picture corresponding to the position information carried in the area setting instruction received by the receiving module, the position information being the coordinate range of one or more adjacent areas selected by the user from N areas into which the preview picture is divided, N being a positive integer greater than or equal to 2.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201680077735.1A CN108781252B (zh) | 2016-10-25 | 2016-10-25 | 一种图像拍摄方法及装置 |
EP16920066.4A EP3518522B1 (en) | 2016-10-25 | 2016-10-25 | Image capturing method and device |
EP21212391.3A EP4030749B1 (en) | 2016-10-25 | 2016-10-25 | Image photographing method and apparatus |
ES21212391T ES2974080T3 (es) | 2016-10-25 | 2016-10-25 | Procedimiento y aparato para fotografiar imágenes |
PCT/CN2016/103296 WO2018076182A1 (zh) | 2016-10-25 | 2016-10-25 | 一种图像拍摄方法及装置 |
HK18116141.4A HK1257020A1 (zh) | 2016-10-25 | 2018-12-17 | 一種圖像拍攝方法及裝置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/103296 WO2018076182A1 (zh) | 2016-10-25 | 2016-10-25 | 一种图像拍摄方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018076182A1 true WO2018076182A1 (zh) | 2018-05-03 |
Family
ID=62024198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/103296 WO2018076182A1 (zh) | 2016-10-25 | 2016-10-25 | 一种图像拍摄方法及装置 |
Country Status (5)
Country | Link |
---|---|
EP (2) | EP3518522B1 (zh) |
CN (1) | CN108781252B (zh) |
ES (1) | ES2974080T3 (zh) |
HK (1) | HK1257020A1 (zh) |
WO (1) | WO2018076182A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110717452A (zh) * | 2019-10-09 | 2020-01-21 | Oppo广东移动通信有限公司 | 图像识别方法、装置、终端及计算机可读存储介质 |
CN111540020A (zh) * | 2020-04-28 | 2020-08-14 | 浙江大华技术股份有限公司 | 目标行为的确定方法及装置、存储介质、电子装置 |
CN111812545A (zh) * | 2020-07-07 | 2020-10-23 | 苏州精濑光电有限公司 | 线路缺陷检测方法、装置、设备及介质 |
US20210117647A1 (en) * | 2018-07-13 | 2021-04-22 | SZ DJI Technology Co., Ltd. | Methods and apparatuses for wave recognition, computer-readable storage media, and unmanned aerial vehicles |
CN113362090A (zh) * | 2020-03-03 | 2021-09-07 | 北京沃东天骏信息技术有限公司 | 一种用户行为数据处理方法和装置 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112843733A (zh) * | 2020-12-31 | 2021-05-28 | 上海米哈游天命科技有限公司 | 拍摄图像的方法、装置、电子设备及存储介质 |
CN113299073B (zh) * | 2021-04-28 | 2023-05-23 | 北京百度网讯科技有限公司 | 识别车辆违章停车的方法、装置、设备和存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201337641A (zh) * | 2012-03-15 | 2013-09-16 | Fih Hong Kong Ltd | 自拍提示系統及方法 |
CN104333689A (zh) * | 2014-03-05 | 2015-02-04 | 广州三星通信技术研究有限公司 | 在拍摄时对预览图像进行显示的方法和装置 |
CN104883497A (zh) * | 2015-04-30 | 2015-09-02 | 广东欧珀移动通信有限公司 | 一种定位拍摄方法及移动终端 |
US20150334293A1 (en) * | 2014-05-16 | 2015-11-19 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
CN105592267A (zh) * | 2015-12-24 | 2016-05-18 | 广东欧珀移动通信有限公司 | 拍照控制方法、拍照控制装置及拍照系统 |
CN105827979A (zh) * | 2016-04-29 | 2016-08-03 | 维沃移动通信有限公司 | 一种拍摄提示的方法和移动终端 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5056061B2 (ja) * | 2007-02-22 | 2012-10-24 | 株式会社ニコン | 撮像装置 |
JP4492697B2 (ja) * | 2007-12-28 | 2010-06-30 | カシオ計算機株式会社 | 撮像装置、及び、プログラム |
WO2011014420A1 (en) * | 2009-07-31 | 2011-02-03 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional (3d) images |
JP5577793B2 (ja) * | 2010-03-30 | 2014-08-27 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
CN102970441B (zh) * | 2012-12-03 | 2014-12-03 | 广东欧珀移动通信有限公司 | 一种基于手机后置摄像头的自拍方法及其手机 |
CN103905710A (zh) * | 2012-12-25 | 2014-07-02 | 夏普株式会社 | 图像捕获方法及其移动终端和设备 |
US20140204263A1 (en) * | 2013-01-22 | 2014-07-24 | Htc Corporation | Image capture methods and systems |
CN104253938A (zh) * | 2013-06-26 | 2014-12-31 | 中兴通讯股份有限公司 | 终端及其智能拍照的方法 |
CN103491307B (zh) * | 2013-10-07 | 2018-12-11 | 厦门美图网科技有限公司 | 一种后置摄像头的智能自拍方法 |
CN105306801A (zh) * | 2014-06-09 | 2016-02-03 | 中兴通讯股份有限公司 | 一种拍摄方法、装置及终端 |
CN105592261A (zh) * | 2014-11-04 | 2016-05-18 | 深圳富泰宏精密工业有限公司 | 辅助拍照系统及方法 |
JP6447955B2 (ja) * | 2014-12-09 | 2019-01-09 | 富士ゼロックス株式会社 | 画像処理装置およびプログラム |
CN105744165A (zh) * | 2016-02-25 | 2016-07-06 | 深圳天珑无线科技有限公司 | 拍照方法、装置及终端 |
-
2016
- 2016-10-25 CN CN201680077735.1A patent/CN108781252B/zh active Active
- 2016-10-25 EP EP16920066.4A patent/EP3518522B1/en active Active
- 2016-10-25 EP EP21212391.3A patent/EP4030749B1/en active Active
- 2016-10-25 WO PCT/CN2016/103296 patent/WO2018076182A1/zh unknown
- 2016-10-25 ES ES21212391T patent/ES2974080T3/es active Active
-
2018
- 2018-12-17 HK HK18116141.4A patent/HK1257020A1/zh unknown
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201337641A (zh) * | 2012-03-15 | 2013-09-16 | Fih Hong Kong Ltd | 自拍提示系統及方法 |
CN104333689A (zh) * | 2014-03-05 | 2015-02-04 | 广州三星通信技术研究有限公司 | 在拍摄时对预览图像进行显示的方法和装置 |
US20150334293A1 (en) * | 2014-05-16 | 2015-11-19 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
CN104883497A (zh) * | 2015-04-30 | 2015-09-02 | 广东欧珀移动通信有限公司 | 一种定位拍摄方法及移动终端 |
CN105592267A (zh) * | 2015-12-24 | 2016-05-18 | 广东欧珀移动通信有限公司 | 拍照控制方法、拍照控制装置及拍照系统 |
CN105827979A (zh) * | 2016-04-29 | 2016-08-03 | 维沃移动通信有限公司 | 一种拍摄提示的方法和移动终端 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3518522A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210117647A1 (en) * | 2018-07-13 | 2021-04-22 | SZ DJI Technology Co., Ltd. | Methods and apparatuses for wave recognition, computer-readable storage media, and unmanned aerial vehicles |
CN110717452A (zh) * | 2019-10-09 | 2020-01-21 | Oppo广东移动通信有限公司 | 图像识别方法、装置、终端及计算机可读存储介质 |
CN110717452B (zh) * | 2019-10-09 | 2022-04-19 | Oppo广东移动通信有限公司 | 图像识别方法、装置、终端及计算机可读存储介质 |
CN113362090A (zh) * | 2020-03-03 | 2021-09-07 | 北京沃东天骏信息技术有限公司 | 一种用户行为数据处理方法和装置 |
CN111540020A (zh) * | 2020-04-28 | 2020-08-14 | 浙江大华技术股份有限公司 | 目标行为的确定方法及装置、存储介质、电子装置 |
CN111540020B (zh) * | 2020-04-28 | 2023-10-10 | 浙江大华技术股份有限公司 | 目标行为的确定方法及装置、存储介质、电子装置 |
CN111812545A (zh) * | 2020-07-07 | 2020-10-23 | 苏州精濑光电有限公司 | 线路缺陷检测方法、装置、设备及介质 |
CN111812545B (zh) * | 2020-07-07 | 2023-05-12 | 苏州精濑光电有限公司 | 线路缺陷检测方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3518522B1 (en) | 2022-01-26 |
ES2974080T3 (es) | 2024-06-25 |
HK1257020A1 (zh) | 2019-10-11 |
EP4030749A1 (en) | 2022-07-20 |
EP4030749B1 (en) | 2024-01-17 |
CN108781252B (zh) | 2021-06-15 |
EP3518522A4 (en) | 2019-09-18 |
CN108781252A (zh) | 2018-11-09 |
EP3518522A1 (en) | 2019-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018076182A1 (zh) | 一种图像拍摄方法及装置 | |
US10198823B1 (en) | Segmentation of object image data from background image data | |
CN108875522B (zh) | 人脸聚类方法、装置和系统及存储介质 | |
US8773502B2 (en) | Smart targets facilitating the capture of contiguous images | |
WO2019001152A1 (zh) | 拍照方法及移动终端 | |
CN110688914A (zh) | 一种手势识别的方法、智能设备、存储介质和电子设备 | |
US8897490B2 (en) | Vision-based user interface and related method | |
TWI695311B (zh) | 一種利用手勢模擬滑鼠操作的方法、裝置及終端 | |
CN108875481B (zh) | 用于行人检测的方法、装置、系统及存储介质 | |
CN104106078B (zh) | 光学字符辨识(ocr)高速缓冲存储器更新 | |
CN112991555B (zh) | 数据展示方法、装置、设备以及存储介质 | |
CN106919326A (zh) | 一种图片搜索方法及装置 | |
WO2016006090A1 (ja) | 電子機器、方法及びプログラム | |
CN109711287B (zh) | 人脸采集方法及相关产品 | |
US20160140762A1 (en) | Image processing device and image processing method | |
CN105022579A (zh) | 基于图像处理的虚拟键盘的实现方法和装置 | |
TWI609314B (zh) | 介面操作控制系統及其方法 | |
CN110334576B (zh) | 一种手部追踪方法及装置 | |
CN109547678B (zh) | 一种处理方法、装置、设备及可读存储介质 | |
CN111258413A (zh) | 虚拟对象的控制方法和装置 | |
CN106408560B (zh) | 一种快速获取有效图像的方法和装置 | |
Meng et al. | Building smart cameras on mobile tablets for hand gesture recognition | |
CN111079662A (zh) | 一种人物识别方法、装置、机器可读介质及设备 | |
CN113838118B (zh) | 距离测量方法、装置以及电子设备 | |
CN109565541B (zh) | 促进捕捉数字图像的方法、设备和系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16920066 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2016920066 Country of ref document: EP Effective date: 20190425 |