WO2019179357A1 - Photographing method and apparatus, smart device, and storage medium (拍摄方法、装置、智能设备及存储介质)


Publication number: WO2019179357A1
Application number: PCT/CN2019/078258
Authority: WIPO (PCT)
Prior art keywords: smart device, imaging, photographing, preset, area
Other languages: English (en), French (fr)
Inventors: 高宝岚, 王雪松, 马健
Original Assignee: 北京猎户星空科技有限公司
Application filed by 北京猎户星空科技有限公司
Publication of WO2019179357A1



Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/80: Camera processing pipelines; components thereof

Definitions

  • the present disclosure relates to the field of artificial intelligence technologies, and in particular, to a photographing method, apparatus, smart device, and storage medium.
  • the user needs to repeatedly adjust the shooting position and shoot by manually triggering the shooting function of the robot.
  • A first object of the present disclosure is to provide a photographing method that, after intelligently determining an imaging object and a target shooting position, controls a smart device to guide the imaging object into the target shooting position and photographs it there. The user no longer needs to adjust the shooting position manually, which solves the problems that repeated position adjustment is required, that manual shooting operations are cumbersome, and that the imaging effect is uncontrollable. By intelligently selecting the optimal shooting position for the imaging object, the method improves the imaging effect, is simple and efficient, and enhances the user experience.
  • a second object of the present disclosure is to propose a photographing apparatus.
  • a third object of the present disclosure is to propose a smart device.
  • a fourth object of the present disclosure is to propose a non-transitory computer readable storage medium.
  • a fifth object of the present disclosure is to propose a computer program product.
  • The first aspect of the present disclosure provides a photographing method, including: acquiring a first picture in the field of view of the smart device, performing focus recognition on the first picture, and determining an imaging object; acquiring a second picture covering the surrounding environment of the smart device, and identifying a target shooting position from the second picture; controlling the smart device to guide the imaging object into the target shooting position; and controlling the smart device to photograph the imaging object.
  • In this way, after the imaging object and the target shooting position are determined intelligently, the smart device is controlled to guide the imaging object into the target shooting position and then photograph it. The user does not need to adjust the shooting position manually, which solves the problem that manual shooting operations are cumbersome, realizes intelligent selection of the best shooting position, improves the imaging effect, is simple and efficient, offers a flexible shooting mode, and improves the user experience.
  • the photographing method according to the above embodiment of the present disclosure may further have the following additional technical features:
  • In an embodiment, the identifying the target shooting position from the second picture includes: extracting image features of each pixel in the second picture; identifying, according to the image features of each pixel, at least one first pixel point whose image features satisfy a preset image feature condition; determining first position information of the first pixel point in the environment according to the position information of the first pixel point in the second picture; and using the first position information as the target shooting position.
  • In an embodiment, the identifying the target shooting position from the second picture includes: identifying, from the second picture, a target area where no obstruction exists, wherein the area of the target area is greater than or equal to a preset area threshold; determining second position information of the target area in the environment according to the position information of each pixel of the target area in the second picture; and using the second position information as the target shooting position.
  • In an embodiment, the identifying the target shooting position from the second picture includes: identifying, from the second picture, a target area where no obstruction exists, wherein the area of the target area is greater than or equal to a preset area threshold; extracting image features of each pixel in the target area; identifying, according to the image features of each pixel, at least one first pixel point that satisfies a preset image feature condition; determining first position information of the first pixel point in the environment according to its position information in the second picture; and using the first position information as the target shooting position.
  • In an embodiment, the controlling the smart device to guide the imaging object into the target shooting position includes: determining a positional relationship between the smart device and the target shooting position, wherein the positional relationship includes at least one of a spatial distance between the smart device and the target shooting position and an angle between the smart device and the target shooting position; controlling, according to the positional relationship, the smart device to move to the target shooting position; and issuing a follow instruction to the imaging object to guide the imaging object into the target shooting position.
  • In an embodiment, the controlling the smart device to photograph the imaging object includes: acquiring a framing picture collected by the smart device; identifying the relative position of the imaging area of the imaging object in the framing picture, and identifying the spatial distance between the imaging object and the smart device; and controlling the smart device to shoot when it is determined, according to the relative position and the spatial distance, that the framing picture meets a preset composition condition.
  • In an embodiment, the method further includes: when the relative position is not within a preset range, driving at least one of the chassis and the pan/tilt of the smart device to rotate according to the relative position, so that the imaging area of the imaging object falls within the preset range of the framing picture; wherein the preset range includes the inside of the viewfinder frame, or the inside of the composition frame, or the overlapping region between the viewfinder frame and the composition frame, or the area covered by the viewfinder frame and the composition frame together; and wherein the composition frame is used to indicate the relative position in the framing picture that meets the preset composition condition.
  • In an embodiment, the driving at least one of the chassis and the pan/tilt of the smart device includes: if the imaging area of the imaging object exceeds the preset range by a first offset in the framing picture, driving the pan/tilt to rotate according to the first offset; and if the imaging area exceeds the preset range by a second offset, driving the chassis to rotate according to the second offset; wherein the second offset is greater than the first offset.
  • In an embodiment, the method further includes: when a photographing instruction is acquired, determining, according to the relative position and the spatial distance, whether the framing picture meets the preset composition condition; if the relative position does not meet the preset composition condition, driving at least one of the chassis and the pan/tilt of the smart device according to the offset of the imaging area of the imaging object relative to the composition frame, until the imaging area of the imaging object is within the composition frame; and if the spatial distance does not meet the preset composition condition, outputting prompt information and continuing to identify the spatial distance until the spatial distance belongs to the spatial distance range indicated by the preset composition condition.
  • In an embodiment, the acquiring the photographing instruction includes: generating the photographing instruction when it is determined, according to the similarity between a preset number of recently collected framing pictures, that the imaging object is in a static state.
  • In an embodiment, the relative position indicated by the preset composition condition includes: the imaging area of the imaging object is at the center of the viewfinder frame in the horizontal direction, and the imaging area of the imaging object is not lower than a preset height in the vertical direction of the viewfinder frame.
  • In an embodiment, the identifying the spatial distance between the imaging object and the smart device includes: determining the spatial distance according to the proportional relationship between the height of the imaging area and the actual height of the imaging object, and the focal length of the image sensor, wherein the image sensor is used by the smart device to collect the framing picture; or determining the spatial distance according to depth data collected by a depth camera of the smart device.
  • In an embodiment, the controlling the smart device to perform shooting includes: controlling the smart device to continuously capture at least two frames of images; and, after the shooting, selecting an image for preview display from the at least two frames of images according to image quality.
  • the second aspect of the present disclosure provides a photographing apparatus, including:
  • An object recognition module configured to acquire a first picture in a field of view of the smart device, perform focus recognition on the first picture, and determine an imaging object;
  • a location recognition module configured to acquire a second screen that covers a surrounding environment of the smart device, and identify a target shooting location from the second screen;
  • a guiding module configured to control the smart device to guide the imaging object into the target shooting position
  • a shooting module configured to control the smart device to take a picture for the imaging object.
  • The photographing apparatus of the embodiment of the present disclosure acquires a first picture in the field of view of the smart device, performs focus recognition on the first picture to determine an imaging object, acquires a second picture covering the surrounding environment of the smart device, identifies a target shooting position from the second picture, controls the smart device to guide the imaging object into the target shooting position, and controls the smart device to photograph the imaging object. Thus, after the imaging object and the target shooting position are determined intelligently, the smart device guides the imaging object into the target shooting position and photographs it, so the user does not need to adjust the shooting position manually. This solves the problem of cumbersome manual shooting operations, realizes intelligent selection of the best shooting position, improves the imaging effect, is simple and efficient, offers a flexible shooting mode, and improves the user experience.
  • the photographing apparatus may further have the following additional technical features:
  • In an embodiment, the location recognition module is configured to: extract image features of each pixel in the second picture; identify, according to the image features of each pixel, at least one first pixel point whose image features meet a preset image feature condition; determine first position information of the first pixel point in the environment according to its position information in the second picture; and use the first position information as the target shooting position.
  • In an embodiment, the location recognition module is configured to: identify, from the second picture, a target area where no obstruction exists, wherein the area of the target area is greater than or equal to a preset area threshold; determine second position information of the target area in the environment according to the position information of each pixel of the target area in the second picture; and use the second position information as the target shooting position.
  • In an embodiment, the location recognition module is configured to: identify, from the second picture, a target area where no obstruction exists, wherein the area of the target area is greater than or equal to a preset area threshold; extract image features of each pixel in the target area; identify, according to the image features of each pixel, at least one first pixel point that satisfies the preset image feature condition; determine first position information of the first pixel point in the environment according to its position information in the second picture; and use the first position information as the target shooting position.
  • In an embodiment, the guiding module is specifically configured to: determine a positional relationship between the smart device and the target shooting position, wherein the positional relationship includes at least one of a spatial distance between the smart device and the target shooting position and an angle between the smart device and the target shooting position; control, according to the positional relationship, the smart device to move to the target shooting position; and issue a follow instruction to the imaging object to guide the imaging object into the target shooting position.
  • In an embodiment, the photographing module includes: an acquiring unit, configured to acquire a framing picture collected by the smart device; an identifying unit, configured to identify the relative position of the imaging area of the imaging object in the framing picture and to identify the spatial distance between the imaging object and the smart device; and a photographing unit, configured to control the smart device to shoot when it is determined, according to the relative position and the spatial distance, that the framing picture meets the preset composition condition.
  • In an embodiment, the photographing module further includes a first driving unit configured to, after the relative position of the imaging area of the imaging object in the framing picture is identified and when the relative position is not within the preset range, drive at least one of the chassis and the pan/tilt of the smart device to rotate, so that the imaging area of the imaging object falls within the preset range of the framing picture; wherein the preset range includes the inside of the viewfinder frame, or the inside of the composition frame, or the overlapping region between the viewfinder frame and the composition frame, or the area covered by the viewfinder frame and the composition frame together; and wherein the composition frame is used to indicate the relative position in the framing picture that matches the preset composition condition.
  • In an embodiment, the first driving unit is configured to: if the imaging area of the imaging object exceeds the preset range by a first offset in the framing picture, drive the pan/tilt to rotate according to the first offset; and if the imaging area exceeds the preset range by a second offset, drive the chassis to rotate according to the second offset; wherein the second offset is greater than the first offset.
  • In an embodiment, the photographing module further includes a determining unit, a second driving unit, and a prompting unit. The determining unit is configured to, when the photographing instruction is acquired and before the smart device is controlled to shoot, determine whether the framing picture meets the preset composition condition according to the relative position and the spatial distance. The second driving unit is configured to, if the relative position does not meet the preset composition condition, drive at least one of the chassis and the pan/tilt of the smart device according to the offset of the imaging area of the imaging object relative to the composition frame, until the imaging area of the imaging object is within the composition frame. The prompting unit is configured to, if the spatial distance does not meet the preset composition condition, output prompt information and return to the identifying unit to continue identifying the spatial distance, until the spatial distance belongs to the spatial distance range indicated by the preset composition condition.
  • In an embodiment, the photographing module further includes an instruction generating unit configured to generate the photographing instruction when it is determined, according to the similarity between a preset number of recently collected framing pictures, that the imaging object is in a static state.
  • In an embodiment, the relative position indicated by the preset composition condition includes: the imaging area of the imaging object is at the center of the viewfinder frame in the horizontal direction, and the imaging area of the imaging object is not lower than a preset height in the vertical direction of the viewfinder frame.
  • In an embodiment, the identifying unit is configured to: determine the spatial distance between the imaging object and the smart device according to the proportional relationship between the height of the imaging area and the actual height of the imaging object, and the focal length of the image sensor, wherein the image sensor is used by the smart device to collect the framing picture; or determine the spatial distance between the imaging object and the smart device according to depth data collected by a depth camera of the smart device.
  • In an embodiment, the photographing unit is configured to: control the smart device to continuously capture at least two frames of images; and, after the shooting, select an image for preview display from the at least two frames of images according to image quality.
  • A third aspect of the present disclosure provides a smart device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the photographing method described in the first aspect.
  • A fourth aspect of the present disclosure provides a computer program product, wherein the photographing method described in the first aspect is implemented when instructions in the computer program product are executed by a processor.
  • A fifth aspect of the present disclosure provides a non-transitory computer readable storage medium storing a computer program, wherein the program, when executed by a processor, implements the photographing method described in the first aspect.
  • FIG. 1 is a schematic flowchart of a photographing method according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of a method for identifying a target shooting position according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic flowchart of another method for identifying a target shooting position according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic flowchart of another method for identifying a target shooting position according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic flowchart of another photographing method according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of the perspective principle;
  • FIG. 7 is a schematic flowchart of another photographing method according to an embodiment of the present disclosure;
  • FIG. 8 is a first schematic diagram of a preset posture according to an embodiment of the present disclosure;
  • FIG. 9 is a second schematic diagram of a preset posture according to an embodiment of the present disclosure;
  • FIG. 10 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present disclosure;
  • FIG. 11 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present disclosure;
  • FIG. 12 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present disclosure;
  • FIG. 13 is a schematic structural diagram of a smart device according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart of a photographing method according to an embodiment of the present disclosure. As shown in FIG. 1 , the method includes:
  • Step 101 Acquire a first picture in a field of view of the smart device, perform focus recognition on the first picture, and determine an imaging object.
  • smart devices include, but are not limited to, smart phones, cameras, tablets, smart robots and the like.
  • An image sensor (such as a camera) is provided on the smart device, and the focus follow function of the smart device is activated by the smart device controller.
  • the first picture in the field of view of the smart device can be acquired by the camera on the smart device. After acquiring the first picture, the first picture may be detected to identify the target entering the monitoring range.
  • The target here can be understood as a person.
  • the smart device can identify the person in the first picture by face detection or human body detection.
  • Specifically, the contour of each object is extracted from the first picture, and the extracted contour is compared with a pre-stored facial contour or human body contour. When the similarity between an extracted contour and the preset contour exceeds a preset threshold, it can be considered that a person is recognized in the first picture. After the human body or human face is recognized, it is determined that a target exists in the field of view, and the recognized target is taken as the imaging object, as sketched below.
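By way of illustration only, a minimal Python sketch of this detection step, assuming OpenCV and its stock Haar face cascade as the detector; the disclosure does not prescribe a particular detection algorithm, and the function name is hypothetical:

```python
# Hypothetical sketch of the face-detection route described above,
# using OpenCV's stock Haar cascade as a stand-in detector.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_imaging_object(first_picture):
    """Return the bounding box of the detected person/face, or None."""
    gray = cv2.cvtColor(first_picture, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5)
    if len(faces) == 0:
        return None  # no target in the field of view
    # Take the largest detection as the imaging object.
    return max(faces, key=lambda box: box[2] * box[3])
```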
  • Step 102 Acquire a second screen covering the surrounding environment of the smart device, and identify a target shooting location from the second screen.
  • For example, the front and rear cameras of the smart device can be turned on simultaneously to perform 360° shooting and obtain a second picture covering the surrounding environment of the smart device.
  • As an example, a position with good light conditions can be taken as the target shooting position. Specifically, a second picture covering the surrounding environment of the smart device is acquired; image features (such as brightness features, color features, and texture features) of each pixel in the second picture are extracted; pixel points satisfying the preset condition are identified according to the image features of each pixel; and the target shooting position is determined according to the position information of the qualifying pixel points in the second picture.
  • As another example, an open area without obstructions can be used as the target shooting position.
  • the second picture covering the surrounding environment of the smart device is acquired, and the area where the obstruction is not present is identified from the second picture, and then the target shooting position is determined according to the position information of the target area in the second picture.
  • As yet another example, the light conditions and obstacles can be considered together, taking an open area with good light as the target shooting position. Specifically, a second picture covering the surrounding environment of the smart device is acquired; a target area without obstructions is identified in the second picture; image features of each pixel in the target area are extracted; pixel points satisfying the preset condition are identified according to those features; and the target shooting position is determined according to the position information of the qualifying pixel points in the second picture.
  • Step 103 The control smart device guides the imaging object into the target shooting position.
  • the position of the target shooting position in the second screen can be extracted, and based on the position and the imaging rule, the positional relationship between the target shooting position and the smart device is obtained.
  • the smart device can be controlled to move to the target photographing position to guide the imaging subject to the target photographing position.
  • Alternatively, the smart device is controlled to broadcast the positional relationship to guide the imaging object into the target shooting position, for example, by playing "the target shooting position is 45° ahead, at a distance of 2 m".
  • the spatial distance between the smart device and the target shooting position is determined, and according to the spatial distance, the smart device is controlled to move to the target shooting position, and a follow instruction is further issued to the imaging object to guide the imaging object to enter the target shooting position.
  • the angle between the smart device and the target shooting position is determined, and then the smart device is controlled to move to the target shooting position according to the angle, and a follow instruction is further issued to the imaging object to guide the imaging object to the target shooting position.
  • the form in which the following instruction is issued to the imaging object includes, but is not limited to, a voice command, a text command, and the like.
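An illustrative sketch of deriving the positional relationship (spatial distance and angle) described above from 2-D ground-plane coordinates; the coordinate representation is an assumption, since the disclosure only requires that the distance and/or angle between the smart device and the target shooting position be determined:

```python
# Assumed helper: distance and bearing from device to target position.
import math

def positional_relationship(device_xy, target_xy):
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    distance = math.hypot(dx, dy)              # spatial distance, in metres
    angle = math.degrees(math.atan2(dy, dx))   # direction toward the target
    return distance, angle

# e.g. broadcast "the target shooting position is 45 degrees ahead, 2 m away"
distance, angle = positional_relationship((0.0, 0.0), (1.41, 1.41))
```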
  • In this way, after the imaging object and the target shooting position are determined intelligently, the smart device is controlled to guide the imaging object into the target shooting position and then photograph it, realizing intelligent selection of the optimal shooting position. This improves the imaging effect, simplifies the operation steps of the shooting process, and improves the user experience.
  • Step 104 Control the smart device to take an image for the imaging object.
  • When the imaging object enters the target shooting position, the smart device can be controlled to photograph it. As an example, whether the imaging object has entered the target shooting position can be recognized in real time from the captured picture, and the shooting function is activated automatically upon recognition. As another example, the shooting function can be initiated by a voice command or a set action of the imaging object.
  • The photographing method of the embodiment of the present disclosure acquires a first picture in the field of view of the smart device, performs focus recognition on the first picture to determine an imaging object, acquires a second picture covering the surrounding environment of the smart device, identifies the target shooting position from the second picture, controls the smart device to guide the imaging object into the target shooting position, and controls the smart device to photograph the imaging object. Thus, after the imaging object and the target shooting position are determined intelligently, the smart device guides the imaging object into the target shooting position and photographs it, so the user does not need to adjust the shooting position, distance, or angle manually. This solves the problem that traditional manual shooting is cumbersome, realizes intelligent selection of the optimal shooting position, improves the imaging effect, is simple and efficient, offers a flexible shooting mode, and improves the user experience.
  • FIG. 2 is a schematic flowchart of a method for identifying a target shooting position according to an embodiment of the present disclosure. As shown in FIG. 2, the method includes:
  • Step 201 Extract image features of each pixel in the second picture.
  • image features include, but are not limited to, color features, brightness features, texture features, and the like.
  • Step 202 Identify, according to the image features of each pixel point, at least one first pixel point whose image features satisfy a preset image feature condition.
  • If the ambient light is dark during shooting, underexposure is likely; if the ambient light is too bright, overexposure is likely. Therefore, a position with suitable light conditions should be selected as the shooting position.
  • the first threshold and the second threshold may be preset, wherein the first threshold is less than the second threshold. Further, according to the brightness characteristic of each pixel, a pixel point whose luminance characteristic is greater than or equal to the first threshold and less than or equal to the second threshold is selected as the first pixel.
  • a third threshold may be preset, and then, according to the brightness characteristic of each pixel, a pixel point whose luminance feature is closest to the third threshold is selected as the first pixel.
  • In this way, an image of higher quality can be captured by shooting at the position in the environment corresponding to the first pixel point.
  • Step 203 Determine first position information of the first pixel point in the environment according to the position information of the first pixel point in the second picture, and use the first position information as the target shooting position.
  • the location information includes, but is not limited to, coordinate information, distance information, direction information, and the like.
  • Thus, the first pixel point satisfying the preset condition is identified, and the shooting position with the best light is determined according to the position information of the first pixel point, as sketched below.
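A minimal sketch of steps 201-203, assuming the image feature is pixel brightness and the preset condition is the two-threshold test described above; the threshold values are illustrative assumptions:

```python
# Brightness-based selection of candidate "first pixel" positions.
import cv2
import numpy as np

FIRST_THRESHOLD = 80     # assumed lower brightness bound
SECOND_THRESHOLD = 180   # assumed upper brightness bound

def find_first_pixels(second_picture):
    """Return (row, col) positions of pixels whose brightness is suitable."""
    brightness = cv2.cvtColor(second_picture, cv2.COLOR_BGR2GRAY)
    mask = (brightness >= FIRST_THRESHOLD) & (brightness <= SECOND_THRESHOLD)
    # Mapping these picture positions to environment coordinates (step 203)
    # depends on the camera model and is omitted here.
    return np.argwhere(mask)
```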
  • The following describes how to identify an open area without obstructions as the target shooting position.
  • FIG. 3 is a schematic flowchart of another method for identifying a target shooting position according to an embodiment of the present disclosure. As shown in FIG. 3, the method includes:
  • Step 301 Identify, from the second picture, a target area where no obstruction exists; wherein the area of the target area is greater than or equal to a preset area threshold.
  • As a possible implementation, the second picture may be converted to grayscale by image processing, and obstacle detection may then be performed on the single-channel grayscale image to identify a target area without obstructions.
  • the area of the target area can be identified.
  • an area threshold can be preset, and the target area whose area is greater than or equal to the area threshold is screened by comparing the area of the target area with the area threshold.
  • the target area where the obstruction does not exist can be identified in the second screen.
  • Step 302 Determine second location information of the target area in the environment according to location information of each pixel in the target area in the second picture.
  • According to the position information of each pixel of the target area in the second picture and the imaging principle, the position information of each pixel's actual position in the environment can be determined; combining these actual positions yields the second position information of the target area in the environment.
  • the location information includes, but is not limited to, coordinate information, distance information, direction information, and the like.
  • The explanation of step 203 is also applicable to step 302 in this embodiment, and details are not repeated here.
  • Step 303 Use the second position information as the target shooting position.
  • the target photographing position is determined based on the second position information to further guide the imaging subject into the target photographing position.
  • Thus, the best shooting position without obstructions is determined based on the second position information; a sketch follows.
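A hedged sketch of steps 301-302: convert the panorama to grayscale, mark obstacle pixels with a simple edge test (a stand-in for whatever obstacle detector the device actually uses), and keep connected obstruction-free regions whose area reaches the preset area threshold:

```python
# Obstruction-free target-area detection on the grayscale panorama.
import cv2
import numpy as np

AREA_THRESHOLD = 5000  # assumed preset area threshold, in pixels

def find_target_areas(second_picture):
    """Return centroids (x, y) of sufficiently large obstruction-free areas."""
    gray = cv2.cvtColor(second_picture, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # crude obstacle proxy
    free = cv2.bitwise_not(cv2.dilate(edges, np.ones((15, 15), np.uint8)))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(free)
    return [tuple(centroids[i]) for i in range(1, n)   # label 0 = background
            if stats[i, cv2.CC_STAT_AREA] >= AREA_THRESHOLD]
```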
  • FIG. 4 is a schematic flowchart of another method for identifying a target shooting position according to an embodiment of the present disclosure. As shown in FIG. 4, the method includes:
  • Step 401 Identify, from the second picture, a target area where no obstruction exists; wherein the area of the target area is greater than or equal to a preset area threshold.
  • The explanation of step 301 is also applicable to step 401, and details are not repeated here.
  • Step 402 Extract image features of each pixel in the target area.
  • image features include, but are not limited to, color features, brightness features, texture features, and the like.
  • Step 403 Identify, according to the image feature of each pixel point, at least one first pixel point that the image feature satisfies a preset image feature condition.
  • The explanation of step 202 is also applicable to step 403, and details are not repeated here.
  • Step 404 Determine first position information of the first pixel point in the environment according to the location information of the first pixel in the second picture.
  • In this embodiment, the first pixel points satisfying the preset image feature condition are selected according to the image features of each pixel point in the target area, so the first position information corresponding to the first pixel points in the environment lies in an open area with good light conditions and no obstructions.
  • Step 405 Use the first position information as the target shooting position.
  • When there are multiple first pixel points, the first position information of each first pixel point in the environment may also be identified and analyzed, and the first position information closest to the smart device may be used as the best shooting position.
  • In this way, the best shooting position, with good light and no obstructions, is selected intelligently as the target shooting position; the two sketches above can be combined as follows.
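Combining the two sketches above, as in FIG. 4: run the brightness test only inside obstruction-free target areas, then (as the text suggests) prefer the candidate position closest to the smart device. The mapping `to_env_coords` from picture to environment coordinates is an assumed camera-model helper, and the thresholds and `find_target_areas` come from the earlier sketches:

```python
import math
import cv2

def find_best_position(second_picture, device_xy, to_env_coords):
    """FIG. 4 flow: brightness test restricted to obstruction-free areas."""
    gray = cv2.cvtColor(second_picture, cv2.COLOR_BGR2GRAY)
    candidates = []
    for cx, cy in find_target_areas(second_picture):
        col, row = int(cx), int(cy)
        if FIRST_THRESHOLD <= gray[row, col] <= SECOND_THRESHOLD:
            candidates.append(to_env_coords((cx, cy)))
    if not candidates:
        return None
    # Prefer the qualifying position nearest to the smart device.
    return min(candidates,
               key=lambda p: math.hypot(p[0] - device_xy[0],
                                        p[1] - device_xy[1]))
```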
  • In order to provide a better imaging effect for the imaging object, the smart device controller can also compose the shot intelligently. Specifically, according to the relative position of the imaging object in the framing picture and its spatial distance from the smart device, it is automatically determined whether the framing picture meets the preset composition condition, and the smart device is controlled to shoot only when the condition is met. This effectively guarantees image quality and enhances the imaging effect.
  • FIG. 5 is a schematic flowchart of another shooting method according to an embodiment of the present disclosure.
  • the shooting method includes:
  • Step 501 Acquire a view screen collected by the smart device.
  • the framing picture can be acquired by the image sensor in the smart device.
  • the image sensor may be a visible light image sensor, or the image sensor may include a visible light image sensor and a structured light image sensor.
  • the visible light image sensor images the visible light reflected by the imaging object to obtain a visible light image;
  • the structured light image sensor can image the structured light reflected by the imaging object to obtain a structured light image.
  • In an embodiment, the framing picture may be collected by the image sensor in the smart device, and the image sensor may then send the collected framing picture to the smart device controller; accordingly, the smart device controller obtains the framing picture.
  • Step 502 Identify the relative position of the imaging area of the imaging object in the framing picture, and identify the spatial distance between the imaging object and the smart device.
  • an image feature of the imaging region of the imaging object in the framing picture may be identified, and then the recognized image feature is input to the pre-trained image feature recognition model to determine the relative position of the imaging region in the framing picture.
  • the image feature recognition model is pre-trained.
  • the sample image may be selected, and then each object in the sample image is labeled based on the image feature of the sample image, and the image feature recognition model is trained by using the labeled sample image.
  • the imaged object is identified by the trained model, and in the framing picture, if the imaged object is recognized, the relative position of the imaged area of the imaged object in the framing picture is determined.
  • In this way, by identifying the image features of the imaging area of the imaging object in the framing picture and inputting them into the pre-trained image feature recognition model, the relative position of the imaging area in the framing picture can be determined.
  • the spatial distance between the imaging object and the smart device may be determined according to a proportional relationship between the height of the imaging region and the actual height of the imaging object and the focal length of the image sensor.
  • the spatial distance between the imaged object and the smart device can be determined based on the perspective theory.
  • Figure 6 is a schematic diagram of the perspective principle. Let f denote the focal length of the image sensor (the distance between the film and the lens), H the actual height of the imaging object, h the height of the imaging area, and d the spatial distance between the imaging object and the smart device. From the similar triangles AOB and COD: h / H = f / d, that is, d = f · H / h. A worked form follows.
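```python
# Worked form of the similar-triangle relation above. Units must be
# consistent: with the focal length and imaging-area height in pixels and
# the actual height in metres, the distance comes out in metres.
def spatial_distance(focal_length_px, actual_height_m, imaging_height_px):
    """d = f * H / h, from the similar triangles AOB and COD."""
    return focal_length_px * actual_height_m / imaging_height_px

# Example: f = 1000 px and a 1.7 m person imaged 500 px tall -> d = 3.4 m
```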
  • the smart device may include a depth camera, and the depth data corresponding to the imaging object may be acquired by the depth camera, and then the spatial distance between the imaging object and the smart device is determined according to the depth data.
  • Step 503 When determining that the framing picture meets the preset composition condition according to the relative position and the spatial distance, the smart device is controlled to perform shooting.
  • the preset composition condition is preset.
  • the preset composition condition may include: the imaging area of the imaging object is at the center of the lateral direction of the finder frame.
  • the preset composition condition may further include: the imaging area of the imaging object is not lower than a preset height in the longitudinal direction of the finder frame.
  • the preset height is preset, for example, the preset height may be preset for the built-in program of the smart device, or the preset height may be set by the user, for example, the preset height may be 1/3, which is not limited.
  • Optionally, the preset composition condition may further include: the spatial distance between the imaging object and the smart device belongs to a preset spatial distance range, so as to avoid the imaging area of the imaging object being too small in the framing picture and the imaging effect being poor.
  • The preset spatial distance range is set in advance; for example, it may be preset by a built-in program of the smart device, or it may be set by the user. Optionally, denote the preset spatial distance range as [a, b]; for example, [a, b] can be [0.5, 3] meters, which is not limited here.
  • In this embodiment, after the relative position of the imaging area of the imaging object in the framing picture and the spatial distance between the imaging object and the smart device are recognized, whether the framing picture meets the preset composition condition can be determined according to the relative position and the spatial distance. When the condition is met, the composition quality is good, so the smart device can be controlled to shoot; when the condition is not met, the composition quality is not yet optimal, so the smart device may not be controlled to shoot. A sketch of this test follows.
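A sketch of the composition test of step 503, using the illustrative values from the text: horizontal centring, the imaging area not lower than 1/3 of the frame in the vertical direction (one possible reading of "not lower than the preset height"), and a distance within [0.5, 3] metres. The centring tolerance is an assumption:

```python
DISTANCE_RANGE = (0.5, 3.0)    # preset spatial distance range [a, b]
HEIGHT_FRACTION = 1.0 / 3.0    # preset height, example value from the text
CENTER_TOLERANCE = 0.05        # assumed fraction of frame width

def meets_composition(box, frame_w, frame_h, distance):
    x, y, w, h = box           # imaging area within the framing picture
    centred = abs((x + w / 2) / frame_w - 0.5) <= CENTER_TOLERANCE
    # Image y grows downward, so "not lower than 1/3" is read here as the
    # top of the imaging area lying in the upper third of the frame.
    high_enough = y <= frame_h * HEIGHT_FRACTION
    in_range = DISTANCE_RANGE[0] <= distance <= DISTANCE_RANGE[1]
    return centred and high_enough and in_range
```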
  • The photographing method of this embodiment acquires the framing picture collected by the smart device, identifies the relative position of the imaging area of the imaging object in the framing picture and the spatial distance between the imaging object and the smart device, and controls the smart device to shoot only when it is determined, according to the relative position and the spatial distance, that the framing picture meets the preset composition condition.
  • The user does not need to adjust the standing position or confirm whether the preview picture meets expectations, which simplifies the operation steps of photographing, improves the user experience, and improves photographing efficiency.
  • The smart device controller automatically determines, according to the relative position of the imaging object in the framing picture and its spatial distance from the smart device, whether the framing picture meets the preset composition condition, and controls the smart device to shoot only when the condition is met, effectively improving imaging quality and enhancing the imaging effect.
  • FIG. 7 is a schematic flowchart of another shooting method according to an embodiment of the present disclosure. As shown in FIG. 7, the shooting method may include The following steps:
  • Step 601 Acquire a viewfinder picture collected by the smart device.
  • Step 602 Identify the relative position of the imaging area of the imaging object in the framing picture.
  • Step 603 Determine whether the relative position is within the preset range; if yes, perform step 605; otherwise, perform step 604.
  • the relative position of the imaging area of the imaging object in the framing picture needs to be within a preset range.
  • The composition frame is located within the viewfinder frame; in this case, the preset range may be the inside of the viewfinder frame or the inside of the composition frame.
  • The composition frame is used to indicate the relative position in the framing picture that meets the preset composition condition.
  • Optionally, the preset range may further include the overlapping region between the viewfinder frame and the composition frame, or the area covered by the viewfinder frame and the composition frame together.
  • In this embodiment, it can be determined whether the relative position is within the preset range. If it is within the preset range and no photographing instruction has been obtained, the process returns to step 601; if it is not within the preset range, step 604 is triggered regardless of whether a photographing instruction has been obtained.
  • Step 604 Drive, according to the relative position, at least one of the chassis and the pan/tilt of the smart device to rotate, so that the imaging area of the imaging object is within the preset range of the framing picture.
  • the pan/tilt of the smart device can be rotated, so that the imaging area of the imaging object is within the preset range of the viewfinder screen.
  • Alternatively, the chassis of the smart device can be driven to rotate so that the imaging area of the imaging object is within the preset range of the framing picture.
  • Specifically, if the imaging area of the imaging object exceeds the preset range by a first offset in the framing picture, the pan/tilt is driven to rotate according to the first offset; if it exceeds the preset range by a second offset, the chassis is driven to rotate according to the second offset, wherein the second offset is greater than the first offset (see the sketch below).
  • the first offset and the second offset are both preset, for example, the first offset (or the second offset) may be preset by a built-in program of the smart device, or An offset (or a second offset) can be set by the user, which is not limited.
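A sketch of this two-tier correction: offsets beyond the larger threshold are absorbed by the chassis, smaller ones by the pan/tilt. The threshold values and the `gimbal`/`chassis` `rotate()` calls are assumptions standing in for the device's actual motor API:

```python
FIRST_OFFSET = 0.05    # assumed, as a fraction of the frame width
SECOND_OFFSET = 0.25   # assumed; must be greater than FIRST_OFFSET

def recenter(offset, gimbal, chassis):
    """offset: how far the imaging area exceeds the preset range."""
    if offset >= SECOND_OFFSET:
        chassis.rotate(offset)   # large error: turn the whole robot
    elif offset >= FIRST_OFFSET:
        gimbal.rotate(offset)    # small error: pan/tilt alone suffices
```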
  • Step 605 Determine whether a photographing instruction is obtained; if yes, perform step 606; otherwise, return to step 601.
  • In the related art, the camera function of a robot is triggered manually by the user, that is, triggered passively (for example, on robots such as the Cruzr and the U05), so the photographing mode is limited.
  • In this embodiment, when the user is in a static state, the camera function of the smart device can be triggered automatically.
  • the smart device controller can recognize whether the imaging object is in a stationary state, and can automatically generate a photographing instruction when it is determined that the imaging object is in a stationary state.
  • the smart device controller may determine that the imaging object is in a stationary state according to the similarity between the preset number of preset view frames collected recently.
  • the preset number is preset, for example, the preset number may be preset for the built-in program of the smart device, or the preset number may also be set by the user, which is not limited. For example, when the preset number is five, if the similarity of the five or more framing pictures collected recently is high, at this time, it can be determined that the imaging object is in a stationary state.
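A hedged sketch of this stillness test: the imaging object is deemed static when the last PRESET_NUMBER framing pictures are pairwise similar. Mean absolute pixel difference is an assumed similarity measure; the disclosure does not fix a particular one:

```python
import numpy as np

PRESET_NUMBER = 5       # example value from the text
SIMILARITY_EPS = 2.0    # assumed mean-absolute-difference threshold

def is_static(recent_frames):
    if len(recent_frames) < PRESET_NUMBER:
        return False
    frames = [f.astype(np.int16) for f in recent_frames[-PRESET_NUMBER:]]
    return all(np.abs(frames[i] - frames[i + 1]).mean() < SIMILARITY_EPS
               for i in range(PRESET_NUMBER - 1))
```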
  • In this embodiment, when the photographing instruction is generated, at least one of text and voice prompt information may be generated to prompt the user to prepare for the photo; for example, the prompt information may be "I am going to take a photo: 3, 2, 1, eggplant!".
  • the smart device controller may identify whether the posture of the imaging object conforms to the preset posture, wherein the preset posture may be set by the user, or the preset posture may be preset for the built-in program of the smart device.
  • the posture of the imaging object may include at least one of a gesture and an expression.
  • the photographing instruction may be generated when it is determined that the posture of the imaging object conforms to a preset posture.
  • the preset gesture may include a gesture made by one hand and a gesture made by both hands.
  • the gesture made by the left hand or the gesture made by the right hand can be performed.
  • the preset posture is "concentric”
  • the imaging object is determined.
  • the posture conforms to the preset posture.
  • the imaged object needs to accurately make a preset gesture.
  • the preset posture is “thank you”, at this time, the imaging object needs to make a fist with the right hand, the left hand is opened, and the palm of the left hand is covered on the right fist.
  • a voice prompt information may also be generated.
  • For example, the prompt information may be "This pose (or expression) is good: 3, 2, 1, eggplant!".
  • the automatic camera can also be triggered by the user voice, and the smart device controller can generate the photographing instruction according to the user voice collected by the smart device.
  • the voice prompt information may also be generated. For example, when the user is standing, the user may be prompted to “photograph!”.
  • In this way, the smart device can trigger photographing automatically in different manners, which enriches the photographing modes and effectively improves the user's photographing experience. If the photographing instruction is obtained, step 606 may be performed; if not, the process returns to step 601.
  • Step 606 Determine whether the relative position meets the preset composition condition; if yes, perform step 608; otherwise, perform step 607.
  • In this embodiment, when the smart device controller obtains the photographing instruction, it may determine whether the relative position meets the preset composition condition. Specifically, it may determine whether the imaging area of the imaging object is at the center of the viewfinder frame in the horizontal direction, and whether the imaging area is not lower than the preset height in the vertical direction of the viewfinder frame.
  • the relative position is determined to conform to the preset composition condition only when the imaging area of the imaging subject is at the center of the lateral direction of the finder frame, and the imaging area of the imaging object is not lower than the preset height in the longitudinal direction of the finder frame.
  • Step 607 Drive at least one of the chassis and the pan/tilt of the smart device according to the offset of the imaging region of the imaging object relative to the framing frame until the imaging region of the imaging object is within the framing frame.
  • the pan-tilt of the smart device can be rotated until the imaging area of the imaging subject is within the framing frame.
  • the chassis of the smart device can be rotated until the imaging area of the imaging object is in the composition frame.
  • At least one of voice and text prompt information may also be output, so that the imaging object moves according to the prompt information until the imaging area is within the composition frame.
  • For example, depending on the direction of the offset, a voice prompt can be given: "Please take two steps to the right", "Please take two steps to the left", "Lift your head and stand up straight!", or "Please take two steps forward!".
  • After that, the smart device controller can continue to recognize the relative position of the imaging area of the imaging object in the framing picture, i.e., re-trigger step 606 and subsequent steps.
  • Step 608 Identify a spatial distance between the imaging object and the smart device, and determine whether the spatial distance meets the preset composition condition. If yes, perform steps 610-611; otherwise, perform step 609.
  • step 608 is performed after step 606, but the disclosure is not limited thereto, step 608 may be performed before step 606, or step 608 may be performed in parallel with step 606.
  • As described above, the preset composition condition may further include: the spatial distance between the imaging object and the smart device belongs to the preset spatial distance range.
  • Specifically, it can be determined whether the spatial distance meets the preset composition condition, that is, whether the spatial distance between the imaging object and the smart device belongs to the spatial distance range indicated by the preset composition condition. If not, step 609 is triggered; otherwise, step 610 is performed.
  • Step 609 Output prompt information, and continue to identify the spatial distance until the spatial distance belongs to the spatial distance range indicated by the preset composition condition.
  • At least one of voice and text prompt information may be output.
  • Denote the spatial distance range indicated by the preset composition condition as [a, b]. When the distance between the imaging object and the smart device is less than a, the imaging object is too close to the smart device, and voice information can be output: "A bit close; please step back a little for a better photo." When the distance is greater than b, voice information can be output: "A bit far; please take two steps forward."
  • the smart device controller can continue to recognize the spatial distance, that is, re-trigger step 608 and subsequent steps.
  • Step 610 Control the smart device to continuously capture at least two frames of images.
  • the smart device when it is determined that the view screen conforms to the preset composition condition according to the relative position and the spatial distance, it indicates that the composition quality at this time is better, and thus the smart device can be controlled to perform shooting.
  • the smart device may be controlled to continuously capture at least two frames of images, so that the image with the best image quality may be selected from at least two frames of images for display.
  • Step 611 Select an image for preview display from at least two frames of images according to image quality.
  • the image with the best image quality can be selected from at least two frames of images for display, so that the user can send or download the image with the best image quality, effectively guarantee the imaging quality, ensure the imaging effect, and improve the user's photographing experience.
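A sketch of steps 610-611: after burst-capturing several frames, keep the one with the best image quality for preview. Variance of the Laplacian is used here as an assumed sharpness proxy; the disclosure does not name a particular quality metric:

```python
import cv2

def pick_preview(images):
    """Return the sharpest of the continuously captured frames."""
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(images, key=sharpness)
```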
  • The photographing method of this embodiment acquires the framing picture collected by the smart device, identifies the relative position of the imaging area of the imaging object in the framing picture and the spatial distance between the imaging object and the smart device, and controls the smart device to shoot only when it is determined, according to the relative position and the spatial distance, that the framing picture meets the preset composition condition.
  • The user does not need to adjust the standing position or confirm whether the preview picture meets expectations, which simplifies the operation steps of photographing, improves the user experience, and improves photographing efficiency.
  • The smart device controller automatically determines, according to the relative position of the imaging object in the framing picture and its spatial distance from the smart device, whether the framing picture meets the preset composition condition, and controls the smart device to shoot only when the condition is met, effectively improving imaging quality and enhancing the imaging effect.
  • FIG. 10 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present disclosure.
  • the photographing apparatus includes: an object recognition module 10 , a position recognition module 20 , a guiding module 30 , and a photographing module 40 .
  • the object recognition module 10 is configured to acquire a first picture in a field of view of the smart device, perform focus recognition on the first picture, and determine an imaging object.
  • the location identification module 20 is configured to acquire a second screen that covers the environment surrounding the smart device, and identify a target shooting location from the second screen.
  • the guiding module 30 is configured to control the smart device to guide the imaging object into the target shooting position.
  • the shooting module 40 is configured to control the smart device to take an image for the imaging object.
Further, the position recognition module 20 is specifically configured to: extract the image feature of each pixel in the second picture; identify, according to the image feature of each pixel, at least one first pixel whose image feature satisfies a preset image feature condition; and determine first position information of the first pixel in the environment according to the position information of the first pixel in the second picture, taking the first position information as the target shooting position.
Alternatively, the position recognition module 20 is specifically configured to: identify, from the second picture, a target area in which no obstruction exists, where the area of the target area is greater than or equal to a preset area threshold; determine second position information of the target area in the environment according to the position information of each pixel in the target area in the second picture; and take the second position information as the target shooting position.
Alternatively, the position recognition module 20 is specifically configured to: identify, from the second picture, a target area in which no obstruction exists, where the area of the target area is greater than or equal to a preset area threshold; extract the image feature of each pixel in the target area; identify, according to the image feature of each pixel, at least one first pixel whose image feature satisfies the preset image feature condition; determine first position information of the first pixel in the environment according to the position information of the first pixel in the second picture; and take the first position information as the target shooting position.
Further, the guiding module 30 is specifically configured to: determine the positional relationship between the smart device and the target shooting position, where the positional relationship includes at least one of the spatial distance between the smart device and the target shooting position and the angle between the smart device and the target shooting position; control the smart device to move toward the target shooting position according to the positional relationship; and issue a follow instruction to the imaging object to guide the imaging object into the target shooting position.
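A minimal sketch of this guidance step, assuming planar 2-D coordinates for the device and the target shooting position, with hypothetical `move_to()` and `say()` actuators that are not part of the disclosure:

```python
import math


def guide_to_target(device_xy, target_xy, move_to, say):
    """Compute the spatial distance and angle to the target shooting
    position, move the device there, and issue a follow instruction."""
    dx = target_xy[0] - device_xy[0]
    dy = target_xy[1] - device_xy[1]
    distance = math.hypot(dx, dy)              # spatial distance
    angle = math.degrees(math.atan2(dy, dx))   # angle to the target position
    move_to(distance, angle)
    say("Please follow me to the shooting position.")
```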
FIG. 11 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present disclosure. As shown in FIG. 11, the photographing module 40 further includes: an acquiring unit 41, an identifying unit 42, and a photographing unit 43.
The acquiring unit 41 is configured to acquire the framing picture collected by the smart device. The identifying unit 42 is configured to identify the relative position of the imaging region of the imaging object in the framing picture, and to identify the spatial distance between the imaging object and the smart device. The photographing unit 43 is configured to control the smart device to shoot when it is determined, according to the relative position and the spatial distance, that the framing picture conforms to the preset composition condition.
Further, the photographing module 40 further includes: a first driving unit 44, configured to, after the relative position of the imaging region of the imaging object in the framing picture is identified and when the relative position is not within a preset range, drive at least one of the chassis and the pan-tilt head of the smart device to rotate according to the relative position, so that the imaging region of the imaging object falls within the preset range of the framing picture.
The preset range includes the inside of the viewfinder frame, or the inside of the composition frame, or the overlapping area between the viewfinder frame and the composition frame, or the area covered by the viewfinder frame and the composition frame. The composition frame is used to indicate the relative position in the framing picture that conforms to the preset composition condition.
Further, the first driving unit 44 is specifically configured to: if, in the framing picture, the imaging region of the imaging object exceeds the preset range by a first offset, drive the pan-tilt head to rotate according to the first offset; and if, in the framing picture, the imaging region of the imaging object exceeds the preset range by a second offset, drive the chassis to rotate according to the second offset, where the second offset is greater than the first offset.
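The two-threshold actuator choice could look like the following sketch; the pixel-based offset, the threshold values, and the actuator callbacks are all assumptions used only for illustration:

```python
def correct_framing(offset, first_offset, second_offset,
                    rotate_gimbal, rotate_chassis):
    """Pick the actuator by how far the imaging region left the preset range.

    `offset` is how far (e.g., in pixels) the imaging region exceeds the
    preset range; `second_offset` must be greater than `first_offset`.
    """
    if abs(offset) >= second_offset:    # large deviation: rotate the chassis
        rotate_chassis(offset)
    elif abs(offset) >= first_offset:   # small deviation: rotate the pan-tilt
        rotate_gimbal(offset)
    # below first_offset: the imaging region is already within the range
```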
FIG. 12 is a schematic structural diagram of another photographing apparatus according to an embodiment of the present disclosure. As shown in FIG. 12, the photographing module 40 further includes: a judging unit 45, a second driving unit 46, and a prompting unit 47.
The judging unit 45 is configured to, before the smart device is controlled to shoot upon determining that the framing picture conforms to the preset composition condition according to the relative position and the spatial distance, judge, when a photographing instruction is acquired, whether the framing picture conforms to the preset composition condition according to the relative position and the spatial distance. The second driving unit 46 is configured to, if it is judged that the relative position does not conform to the preset composition condition, drive at least one of the chassis and the pan-tilt head of the smart device to move according to the offset of the imaging region of the imaging object relative to the composition frame, until the imaging region of the imaging object is within the composition frame. The prompting unit 47 is configured to, if it is judged that the spatial distance does not conform to the preset composition condition, output prompt information and return to the identifying unit to continue recognizing the spatial distance, until the spatial distance falls within the spatial distance range indicated by the preset composition condition.
Further, the photographing module 40 further includes an instruction generating unit 48. As a possible implementation, the instruction generating unit 48 is configured to generate the photographing instruction when it is determined, according to the similarity between a preset number of recently collected framing pictures, that the imaging object is in a stationary state. As another possible implementation, the instruction generating unit 48 is configured to generate the photographing instruction when it is determined that the posture of the imaging object conforms to a preset posture, where the posture includes at least one of a gesture and an expression. As yet another possible implementation, the instruction generating unit 48 is configured to generate the photographing instruction according to user voice collected by the smart device.
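For the stillness-based trigger, the disclosure does not specify the similarity measure between recent framing pictures; mean absolute pixel difference is one simple stand-in, as in this assumed sketch (the threshold is hypothetical, and the five-frame window mirrors the example given elsewhere in the disclosure):

```python
import numpy as np


def is_stationary(recent_frames, threshold=3.0):
    """Heuristic: the imaging object is treated as stationary when
    consecutive frames among the recent ones are nearly identical."""
    for prev, cur in zip(recent_frames, recent_frames[1:]):
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16)).mean()
        if diff > threshold:
            return False
    return True


# e.g., generate the photographing instruction once the last 5 frames agree:
# if is_stationary(frames[-5:]):
#     issue_photograph_instruction()   # hypothetical trigger
```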
The photographing apparatus of the embodiments of the present disclosure acquires a first picture within the field of view of the smart device, performs focus recognition on the first picture to determine the imaging object, then acquires a second picture covering the environment surrounding the smart device and identifies the target shooting position from the second picture, and further controls the smart device to guide the imaging object into the target shooting position and to shoot the imaging object. Thus, after the imaging object and the target shooting position are intelligently determined, the smart device is controlled to guide the imaging object into the target shooting position and then shoot it, so that the user does not need to manually adjust the shooting position. This solves the problem of cumbersome manual shooting operations, realizes intelligent selection of the best shooting position for shooting the imaging object, improves the imaging effect, is simple and efficient, offers flexible shooting modes, and improves the user experience.
To implement the above embodiments, the present disclosure further proposes a smart device. FIG. 13 is a schematic structural diagram of a smart device according to an embodiment of the present disclosure. As shown in FIG. 13, the smart device includes a memory 701, a processor 702, and a computer program stored on the memory 701 and executable on the processor 702. When the processor 702 executes the program, the photographing method described in any of the foregoing embodiments of the present disclosure is implemented.
To implement the above embodiments, the present disclosure further proposes a computer program product; when the instructions in the computer program product are executed by a processor, the photographing method described in any of the foregoing embodiments is implemented.
To implement the above embodiments, the present disclosure further proposes a non-transitory computer-readable storage medium having a computer program stored thereon; when the program is executed by a processor, the photographing method described in any of the foregoing embodiments is implemented.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means at least two, such as two or three, unless otherwise specifically defined.
Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved. This should be understood by those skilled in the art to which the embodiments of the present disclosure belong.
For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner when necessary, and then stored in a computer memory.
It should be understood that the various parts of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Although the embodiments of the present disclosure have been shown and described above, it should be understood that the foregoing embodiments are exemplary and shall not be construed as limiting the present disclosure; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

本公开提出一种拍摄方法、装置、智能设备及存储介质,其中,方法包括:获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象;获取覆盖智能设备周边环境的第二画面,从第二画面中识别出目标拍摄位置;控制智能设备引导成像对象进入目标拍摄位置;控制智能设备为成像对象进行拍摄。由此,在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,并对成像对象进行拍摄,使用户不需要手动调整拍摄位置,解决了手动拍摄操作繁琐的问题,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简单高效,提升了用户体验。

Description

拍摄方法、装置、智能设备及存储介质
相关申请的交叉引用
本公开要求北京猎户星空科技有限公司于2018年3月21日提交的、发明名称为“拍摄方法、装置、智能设备及存储介质”的、中国专利申请号“201810236367.1”的优先权。
技术领域
本公开涉及人工智能技术领域,尤其涉及一种拍摄方法、装置、智能设备及存储介质。
背景技术
随着人工智能技术的不断发展,人工智能产品,例如机器人不断普及,用户可以使用机器人进行拍摄。
相关技术中,用户需要反复调整拍摄位置,通过手动触发机器人的拍摄功能进行拍摄。
发明内容
为此,本公开的第一个目的在于提出一种拍摄方法,以实现在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,并对成像对象进行拍摄,使用户不需要手动调整拍摄位置,解决了用户需要反复调整拍摄位置,手动拍摄操作繁琐并且成像效果不可控的问题,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简单高效,提升了用户体验。
本公开的第二个目的在于提出一种拍摄装置。
本公开的第三个目的在于提出一种智能设备。
本公开的第四个目的在于提出一种非临时性计算机可读存储介质。
本公开的第五个目的在于提出一种计算机程序产品。
本公开第一方面实施例提出了一种拍摄方法,包括:
获取智能设备视野范围内的第一画面,对所述第一画面进行焦点识别,确定成像对象;
获取覆盖智能设备周边环境的第二画面,从所述第二画面中识别出目标拍摄位置;
控制所述智能设备引导所述成像对象进入所述目标拍摄位置;
控制所述智能设备为所述成像对象进行拍摄。
本公开实施例的拍摄方法,通过获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象,进而获取覆盖智能设备周边环境的第二画面,从第二画面中 识别出目标拍摄位置,进一步控制智能设备引导成像对象进入目标拍摄位置,并控制智能设备为成像对象进行拍摄。由此,在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,以进一步对成像对象进行拍摄,使用户不需要手动调整拍摄位置,解决了手动拍摄操作繁琐的问题,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简单高效,并且拍摄模式灵活,提升了用户体验。
另外,根据本公开上述实施例的拍摄方法还可以具有如下附加技术特征:
可选地,所述从所述第二画面中识别目标拍摄位置,包括:提取所述第二画面中的每个像素点的图像特征;根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息,将所述第一位置信息作为所述目标拍摄位置。
可选地,所述从所述第二画面中识别目标拍摄位置,包括:从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;根据所述目标区域内每个像素点在所述第二画面中的位置信息,确定所述目标区域在环境中的第二位置信息;将所述第二位置信息作为所述目标拍摄位置。
可选地,所述从所述第二画面中识别目标拍摄位置,包括:从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;提取所述目标区域中的每个像素点的图像特征;根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息;将所述第一位置信息作为所述目标拍摄位置。
可选地,所述控制所述智能设备引导所述拍摄对象进入所述目标拍摄位置,包括:确定所述智能设备与所述目标拍摄位置的之间的位置关系;其中,所述位置关系包括所述智能设备与所述目标拍摄位置之间的空间距离、所述智能设备与所述目标拍摄位置之间的角度中的至少一个;根据所述位置关系,控制所述智能设备向所述目标拍摄位置移动;向所述成像对象发出跟随指令,引导所述成像对象进入所述目标拍摄位置。
可选地,所述控制所述智能设备为所述成像对象进行拍摄,包括:获取所述智能设备采集的取景画面;识别成像对象的成像区域在所述取景画面中的相对位置,以及识别所述成像对象与所述智能设备之间的空间距离;当根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄。
可选地,所述识别成像对象的成像区域在所述取景画面中的相对位置之后,还包括:当所述相对位置未处于预设范围内时,根据所述相对位置,驱动所述智能设备的底盘和云台中的至少一个转动,以使所述成像对象的成像区域处于所述取景画面的预设范围内;其 中,所述预设范围,包括取景框内,或者构图框内,或者所述取景框和所述构图框之间的交叠区域,或者所述取景框和所述构图框覆盖区域;其中,所述构图框,用于指示所述取景画面中符合所述预设构图条件指示的相对位置。
可选地,所述驱动所述智能设备的底盘和云台中的至少一个移动,包括:若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第一偏移量,根据所述第一偏移量,驱动所述云台转动;若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第二偏移量,根据所述第二偏移量,驱动所述底盘转动;其中,所述第二偏移量大于所述第一偏移量。
可选地,所述当根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄之前,还包括:当获取到拍照指令时,根据所述相对位置和所述空间距离,判断所述取景画面是否符合预设构图条件;若判断出所述相对位置不符合所述预设构图条件,根据所述成像对象的成像区域相对构图框的偏移量,驱动所述智能设备的底盘和云台中的至少一个移动,直至所述成像对象的成像区域处于所述构图框内;若判断出所述空间距离不符合所述预设构图条件,输出提示信息,并继续识别所述空间距离,直至所述空间距离属于所述预设构图条件指示的空间距离范围。
可选地,所述获取到拍照指令,包括:根据最近采集到的预设个数的取景画面之间的相似性,确定所述成像对象处于静止状态的情况下,生成所述拍照指令。
可选地,所述预设构图条件指示的相对位置,包括:所述成像对象的成像区域处于所述取景框横向的中心;且,所述成像对象的成像区域不低于所述取景框纵向的预设高度。
可选地,所述识别所述成像对象与所述智能设备之间的空间距离,包括:根据所述成像区域的高度与所述成像对象的实际高度之间的比例关系以及图像传感器的焦距,确定所述成像对象与所述智能设备之间的空间距离;其中,所述图像传感器用于所述智能设备采集所述取景画面;或者,根据所述智能设备的深度摄像头采集到的深度数据,确定所述成像对象与所述智能设备之间的空间距离。
可选地,所述控制所述智能设备进行拍摄,包括:控制所述智能设备连续拍摄至少两帧图像;所述控制所述智能设备进行拍摄之后,还包括:根据图像质量,从所述至少两帧图像中选取用于预览展示的图像。
本公开第二方面实施例提出了一种拍摄装置,包括:
对象识别模块,用于获取智能设备视野范围内的第一画面,对所述第一画面进行焦点识别,确定成像对象;
位置识别模块,用于获取覆盖智能设备周边环境的第二画面,从所述第二画面中识别出目标拍摄位置;
引导模块,用于控制所述智能设备引导所述成像对象进入所述目标拍摄位置;
拍摄模块,用于控制所述智能设备为所述成像对象进行拍摄。
本公开实施例的拍摄装置,通过获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象,进而获取覆盖智能设备周边环境的第二画面,从第二画面中识别出目标拍摄位置,进一步控制智能设备引导成像对象进入目标拍摄位置,并控制智能设备为成像对象进行拍摄。由此,在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,以进一步对成像对象进行拍摄,使用户不需要手动调整拍摄位置,解决了手动拍摄操作繁琐的问题,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简单高效,并且拍摄模式灵活,提升了用户体验。
另外,根据本公开上述实施例的拍摄装置还可以具有如下附加技术特征:
可选地,所述位置识别模块,具体用于:提取所述第二画面中的每个像素点的图像特征;根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息,将所述第一位置信息作为所述目标拍摄位置。
可选地,所述位置识别模块,具体用于:从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;根据所述目标区域内每个像素点在所述第二画面中的位置信息,确定所述目标区域在环境中的第二位置信息;将所述第二位置信息作为所述目标拍摄位置。
可选地,所述位置识别模块,具体用于:从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;提取所述目标区域中的每个像素点的图像特征;根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息;将所述第一位置信息作为所述目标拍摄位置。
可选地,所述引导模块,具体用于:确定所述智能设备与所述目标拍摄位置的之间的位置关系;其中,所述位置关系包括所述智能设备与所述目标拍摄位置之间的空间距离、所述智能设备与所述目标拍摄位置之间的角度中的至少一个;根据所述位置关系,控制所述智能设备向所述目标拍摄位置移动;向所述成像对象发出跟随指令,引导所述成像对象进入所述目标拍摄位置。
可选地,所述拍摄模块,包括:获取单元,用于获取所述智能设备采集的取景画面;识别单元,用于识别成像对象的成像区域在所述取景画面中的相对位置,以及识别所述成像对象与所述智能设备之间的空间距离;拍摄单元,用于当根据所述相对位置和所述空间 距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄。
可选地,所述拍摄模块,还包括:第一驱动单元,用于在识别成像对象的成像区域在所述取景画面中的相对位置之后,当所述相对位置未处于预设范围内时,根据所述相对位置,驱动所述智能设备的底盘和云台中的至少一个转动,以使所述成像对象的成像区域处于所述取景画面的预设范围内;其中,所述预设范围,包括取景框内,或者构图框内,或者所述取景框和所述构图框之间的交叠区域,或者所述取景框和所述构图框覆盖区域;其中,所述构图框,用于指示所述取景画面中符合所述预设构图条件指示的相对位置。
可选地,所述第一驱动单元,具体用于:若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第一偏移量,根据所述第一偏移量,驱动所述云台转动;若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第二偏移量,根据所述第二偏移量,驱动所述底盘转动;其中,所述第二偏移量大于所述第一偏移量。
可选地,所述拍摄模块,还包括:判断单元、第二驱动单元和提示单元;所述判断单元,用于当获取到拍照指令时,根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄之前,当获取到拍照指令时,根据所述相对位置和所述空间距离,判断所述取景画面是否符合预设构图条件;第二驱动单元,用于若判断出所述相对位置不符合所述预设构图条件,根据所述成像对象的成像区域相对构图框的偏移量,驱动所述智能设备的底盘和云台中的至少一个移动,直至所述成像对象的成像区域处于所述构图框内;提示单元,用于若判断出所述空间距离不符合所述预设构图条件,输出提示信息,并返回所述识别单元继续识别所述空间距离,直至所述空间距离属于所述预设构图条件指示的空间距离范围。
可选地,所述拍摄模块,还包括:指令生成单元;所述指令生成单元,用于根据最近采集到的预设个数的取景画面之间的相似性,确定所述成像对象处于静止状态的情况下,生成所述拍照指令。
可选地,所述预设构图条件指示的相对位置,包括:所述成像对象的成像区域处于所述取景框横向的中心;且,所述成像对象的成像区域不低于所述取景框纵向的预设高度。
可选地,所述识别单元,具体用于:根据所述成像区域的高度与所述成像对象的实际高度之间的比例关系以及图像传感器的焦距,确定所述成像对象与所述智能设备之间的空间距离;其中,所述图像传感器用于所述智能设备采集所述取景画面;或者,根据所述智能设备的深度摄像头采集到的深度数据,确定所述成像对象与所述智能设备之间的空间距离。
可选地,所述拍摄单元,具体用于:控制所述智能设备连续拍摄至少两帧图像;所述控制所述智能设备进行拍摄之后,还包括:根据图像质量,从所述至少两帧图像中选取用 于预览展示的图像。
本公开第三方面实施例提出了一种智能设备,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如第一方面实施例所述的拍摄方法。
本公开第四方面实施例提出了一种计算机程序产品,其特征在于,当所述计算机程序产品中的指令处理器执行时实现如第一方面实施例所述的拍摄方法。
本公开第五方面实施例提出了一种非临时性计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如第一方面实施例所述的拍摄方法。
本公开附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本公开的实践了解到。
附图说明
本公开上述的和附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本公开实施例所提供的一种拍摄方法的流程示意图;
图2为本公开实施例所提供的一种识别目标拍摄位置方法的流程示意图;
图3为本公开实施例所提供的另一种识别目标拍摄位置方法的流程示意图;
图4为本公开实施例所提供的另一种识别目标拍摄位置方法的流程示意图;
图5为本公开实施例所提供的另一种拍摄方法的流程示意图;
图6为透视理论的原理示意图;
图7为本公开实施例所提供的另一种拍摄方法的流程示意图;
图8为本公开实施例中预设姿态示意图一;
图9为本公开实施例中预设姿态示意图二;
图10为本公开实施例所提供的一种拍摄装置的结构示意图;
图11为本公开实施例所提供的另一种拍摄装置的结构示意图;
图12为本公开实施例所提供的另一种拍摄装置的结构示意图;
图13为本公开实施例所提供的智能设备的结构示意图。
具体实施方式
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本公开,而不能理解为对本公开的限制。
下面参考附图描述本公开实施例的拍摄方法、装置、智能设备及存储介质。
图1为本公开实施例所提供的一种拍摄方法的流程示意图,如图1所示,该方法包括:
步骤101,获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象。
其中,智能设备包括但不限于智能手机、摄像机、平板电脑、智能机器人等设备。
本实施例中,智能设备上配置有图像传感器,如摄像头,由智能设备控制器启动智能设备的焦点跟随功能。具体地,可以通过智能设备上的摄像头获取智能设备视野范围内的第一画面。在获取第一画面后,可对第一画面进行检测,以识别进入监控范围的目标。其中,这里的目标,可以理解为人。以识别第一画面中的人为例,智能设备可通过人脸检测或者人体检测,识别处于第一画面中的人。具体而言,从第一画面中提取物体的轮廓,将提取的物体轮廓与预存的人脸轮廓或人体轮廓,进行比对。当提取的轮廓与预设的轮廓之间的相似度超过预设的阈值,可以认为从第一画面中识别到了人。或者,提取基于人脸检测技术从第一画面中,识别人脸。在识别人体或者人脸后,则确定视野范围内存在目标,然后将识别出的目标作为成像对象。
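A common way to realize the face-detection step described above is a pretrained detector. The following Python sketch is purely illustrative and is not the specific detector of the disclosure; the use of an OpenCV Haar cascade is an assumption:

```python
import cv2

# Haar cascade shipped with OpenCV; any face/person detector would do here.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def find_imaging_object(first_picture):
    """Return the largest detected face box in the first picture, or None."""
    gray = cv2.cvtColor(first_picture, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])  # (x, y, w, h)
```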
步骤102,获取覆盖智能设备周边环境的第二画面,从第二画面中识别出目标拍摄位置。
可以理解,为了使拍摄的图像更加美观,需要寻找合适的位置进行拍摄。例如,在光线条件好的位置或无障碍物视野好的空旷区域进行拍摄,获得的图像往往更加美观,更符合用户期望。因此,在确定成像对象之后,还需要确定拍摄位置,以进一步在拍摄位置对成像对象进行拍摄。
作为一种可能的实现方式,可以在智能设备上同时开启前后摄像头进行360°的拍摄,以获取覆盖智能设备周边环境的第二画面。
在本公开的一个实施例中,可以将光线条件好的位置作为目标拍摄位置。具体地,获取覆盖智能设备周边环境的第二画面,进而提取第二画面中的每个像素点的图像特征(例如亮度特征、颜色特征、纹理特征等),进一步根据每个像素点的图像特征,识别出第二画面中满足预设条件的像素点,进一步根据满足预设条件的像素点在第二画面中的位置信息,确定目标拍摄位置。
在本公开的一个实施例中,可以将无障碍物的空旷区域作为目标拍摄位置。具体地,获取覆盖智能设备周边环境的第二画面,进而从第二画面中识别不存在遮挡物的区域,进而根据目标区域在第二画面中的位置信息,确定目标拍摄位置。
在本公开的一个实施例中,可以综合考虑光照条件和障碍物情况,将光线条件好的空旷区域作为目标拍摄位置。具体地,获取覆盖智能设备周边环境的第二画面,进而从第二 画面中识别不存在遮挡物的目标区域,进一步提取目标区域中每个像素点的图像特征,进一步根据每个像素点的图像特征,识别出满足预设条件的像素点,进一步根据满足预设条件的像素点在第二画面中的位置信息,确定目标拍摄位置。
由此,实现了智能的选择最佳拍摄位置,简单高效,提升了用户体验。
步骤103,控制智能设备引导成像对象进入目标拍摄位置。
在获取到目标拍摄位置后,可以提取到目标拍摄位置在第二画面中的位置,基于该位置和成像法则,得到目标拍摄位置与智能设备之间的位置关系。在确定了位置关系,可以控制智能设备向目标拍照位置移动,以引导成像对象进入目标拍摄位置。或者,控制智能设备播报位置关系,以引导成像对象进入目标拍摄位置,例如播放前方45°方向,距离2m处为目标拍摄位置。
作为一种示例,确定智能设备与目标拍摄位置的之间的空间距离,进而根据空间距离,控制智能设备向目标拍摄位置移动,进一步向成像对象发出跟随指令,引导成像对象进入目标拍摄位置。
作为另一种示例,确定智能设备与目标拍摄位置的之间的角度,进而根据角度,控制智能设备向目标拍摄位置移动,进一步向成像对象发出跟随指令,引导成像对象进入目标拍摄位置。
其中,向成像对象发出跟随指令的形式包括但不限于语音指令、文字指令等。
本实施例中,在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,以进一步对成像对象进行拍摄,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简化了拍摄过程的操作步骤,提升了用户体验。
步骤104,控制智能设备为成像对象进行拍摄。
当成像对象进入目标拍摄位置后,就可以控制智能设备为成像对象进行拍摄。作为一种示例,可以实时从采集的画面中,识别是否有成像对象进入到目标拍摄位置,当识别出到则自动启动拍摄功能。作为一种示例,可以由成像对象发出语音指令或者设定动作,启动拍摄功能。
本公开的实施例的拍摄方法,通过获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象,进而获取覆盖智能设备周边环境的第二画面,从第二画面中识别出目标拍摄位置,进一步控制智能设备引导成像对象进入目标拍摄位置,并控制智能设备为成像对象进行拍摄。由此,在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,以进一步对成像对象进行拍摄,使用户不需要手动调整拍摄位置、距离、角度等,解决了传统手动拍摄操作繁琐的问题,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简单高效,并且拍摄模式灵活, 提升了用户体验。
基于上述实施例,下面对如何识别光线最佳的目标拍摄位置进行详细说明。
图2为本公开实施例所提供的一种识别目标拍摄位置方法的流程示意图,如图2所示,该方法包括:
步骤201,提取第二画面中的每个像素点的图像特征。
其中,图像特征包括但不限于颜色特征、亮度特征、纹理特征等。
步骤202,根据每个像素点的图像特征,识别出图像特征满足预设的图像特征条件的至少一个第一像素点。
可以理解,在拍摄过程中如果环境光线条件较暗,容易出现欠曝光,如果环境光线条件较亮,容易出现过曝光,因此,需要选择光线条件合适的位置作为拍摄位置。下面以亮度特征为例进行举例说明:
作为一种示例,可以预先设置第一阈值和第二阈值,其中,第一阈值小于第二阈值。进而,根据每个像素点的亮度特征,筛选出亮度特征大于等于第一阈值且小于等于第二阈值的像素点,作为第一像素点。
作为另一种示例,可以预先设置一个第三阈值,进而,根据每个像素点的亮度特征,筛选出亮度特征最接近第三阈值的像素点,作为第一像素点。
由于第一像素点的图像特征满足预设的图像特征条件,因此,在环境中第一像素点对应的位置进行拍摄,就能拍摄出质量较高的图像。
步骤203,根据第一像素点在第二画面中的位置信息,确定第一像素点在环境中的第一位置信息,将第一位置信息作为目标拍摄位置。
其中,位置信息包括但不限于坐标信息、距离信息、方向信息等。通过第一像素点在第二画面中的位置信息,可以确定环境中第一像素点对应的第一位置与智能设备之间的距离和方向,进而将第一位置作为目标拍摄位置,并引导成像对象进入目标拍摄位置。
由此,根据第二画面中像素点的图像特征,识别出满足预设条件的第一像素点,进一步根据第一像素点的位置信息,确定光线最佳的拍摄位置。
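To make the luminance-based selection of the first pixels concrete, here is an assumed Python sketch; the thresholds t1 and t2 stand in for the preset image feature condition and are hypothetical, as is the use of OpenCV:

```python
import cv2
import numpy as np


def find_well_lit_pixels(second_picture, t1=80, t2=180):
    """Return (row, col) coordinates of pixels whose luminance lies in
    [t1, t2]; these are the candidate first pixels."""
    gray = cv2.cvtColor(second_picture, cv2.COLOR_BGR2GRAY)
    mask = (gray >= t1) & (gray <= t2)
    return np.argwhere(mask)
```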
基于上述实施例,下面对如何识别无遮挡物的空旷区域作为目标拍摄位置进行详细说明。
图3为本公开实施例所提供的另一种识别目标拍摄位置方法的流程示意图,如图3所示,该方法包括:
步骤301,从第二画面中识别不存在遮挡物的目标区域;其中,目标区域的面积大于或者等于预设的面积阈值。
可以理解,在空旷的区域进行拍摄得到的图像往往更加美观,因此,可以选择不存在 遮挡物的区域作为拍摄位置。
在本公开的一个实施例中,可以通过图像处理技术,将第二画面进行灰度化处理,进而在基于单通道灰度图像上进行障碍物检测,由此可以识别出不存在遮挡物的目标区域。
进一步,还可以对目标区域的面积进行识别,当目标区域面积过小时,可能在拍摄时会出现周围障碍物遮挡的情况。因此,可以预设一个面积阈值,通过将目标区域的面积和面积阈值进行比较,从而筛选出面积大于或者等于面积阈值的目标区域。
由此,可以在第二画面中识别出不存在遮挡物的目标区域。
步骤302,根据目标区域内每个像素点在第二画面中的位置信息,确定目标区域在环境中的第二位置信息。
具体地,根据目标区域内每个像素点在第二画面中的位置信息,通过成像原理,可以确定每个像素点在环境中对应的实际位置的位置信息,将这些实际位置信息进行组合,就可以得到目标区域在环境中的第二位置信息。
其中,位置信息包括但不限于坐标信息、距离信息、方向信息等。
需要说明的是,前述对于步骤203的解释说明同样适用于本实施例中的步骤302,此处不再赘述。
步骤303,将第二位置信息作为目标拍摄位置。
具体地,根据第二位置信息确定目标拍摄位置,以进一步引导成像对象进入目标拍摄位置。
由此,通过从第二画面中识别不存在遮挡物的目标区域,进而根据目标区域内每个像素点在第二画面中的位置信息,确定目标区域在环境中的第二位置信息,进一步根据第二位置信息确定不存在遮挡物的最佳拍摄位置。
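One plausible realization of the obstruction-free target-area search is connected-component analysis over a binary obstacle mask. How that mask is produced is left open by the disclosure, so the sketch below covers only the area-threshold filtering step and is an assumption throughout:

```python
import cv2
import numpy as np


def find_open_area(free_space_mask, area_threshold):
    """Given a binary mask where 0 marks obstacles and 255 marks free
    space, return (area, centroid) of the largest obstruction-free region
    whose area meets the preset threshold, or None."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        free_space_mask.astype(np.uint8))
    best = None
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= area_threshold and (best is None or area > best[0]):
            best = (area, tuple(centroids[i]))
    return best
```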
基于上述实施例,还可以结合光线条件和遮挡物条件确定出最佳拍摄位置。
图4为本公开实施例所提供的另一种识别目标拍摄位置方法的流程示意图,如图4所示,该方法包括:
步骤401,从第二画面中识别不存在遮挡物的目标区域;其中,目标区域的面积大于或者等于预设的面积阈值。
需要说明的是,前述实施例对于步骤301的解释说明同样适用于步骤401,此处不再赘述。
步骤402,提取目标区域中的每个像素点的图像特征。
其中,图像特征包括但不限于颜色特征、亮度特征、纹理特征等。
步骤403,根据每个像素点的图像特征,识别出图像特征满足预设的图像特征条件的至少一个第一像素点。
需要说明的是,前述实施例对于步骤202的解释说明同样适用于步骤403,此处不再赘述。
步骤404,根据第一像素点在第二画面中的位置信息,确定第一像素点在环境中的第一位置信息。
本实施例中,由于目标区域已经是不存在遮挡物适合拍摄的区域,因此,在目标区域内根据每个像素点的图像特征,选取出满足预设图像特征条件的第一像素点,第一像素点在环境中对应的第一位置信息,就是光线条件良好并且无遮挡物的空旷区域。
步骤405,将第一位置信息作为目标拍摄位置。
在本公开的一个实施例中,当获得多个第一像素点时,还可以对第一像素点在环境中的第一位置信息进行识别分析,将距离智能设备最近的第一位置信息作为最佳拍摄位置。
由此,通过首先对目标区域进行识别,进而根据目标区域中每个像素点的图像特征识别出第一像素点,根据第一像素点的位置信息确定环境中对应的第一位置信息,并作为目标拍摄位置,实现了智能的选择光线良好且无遮挡物的最佳拍摄位置。
基于上述实施例,为了向成像对象提供更好地成像效果,还可以由智能设备控制器为成像对象进行智能构图。具体地,根据成像对象在取景画面中的相对位置和与智能设备之间的空间距离,自动确定取景画面是否符合预设构图条件,只有当符合预设构图条件时,才控制智能设备进行拍摄,可以有效保障成像质量,提升成像效果。
在上述情况下,步骤104的具体处理步骤如图5所示,图5为本公开实施例所提供的另一种拍摄方法的流程示意图,该拍摄方法包括:
步骤501,获取智能设备采集的取景画面。
本公开实施例中,可以通过智能设备中的图像传感器采集取景画面。其中,图像传感器可以为可见光图像传感器,或者,图像传感器可以包括可见光图像传感器和结构光图像传感器。可见光图像传感器利用成像对象反射的可见光进行成像,得到可见光图像;结构光图像传感器可以根据成像对象反射的结构光成像,得到结构光图像。
具体地,在智能设备的拍照功能被唤醒后,可以通过智能设备中的图像传感器采集取景画面,而后,图像传感器可以将采集的取景画面发送至智能设备控制器,相应地,智能设备控制器可以获取取景画面。
步骤502,识别成像对象的成像区域在取景画面中的相对位置,以及识别成像对象与智能设备之间的空间距离。
具体地,可以识别成像对象的成像区域在取景画面中的图像特征,而后将识别出的图像特征输入至预先训练的图像特征识别模型,确定成像区域在取景画面中的相对位置。其中,图像特征识别模型是经过预先训练的,具体地,可以选取样本图像,而后基于样本图 像的图像特征,对样本图像中各个物体进行标注,利用标注过的样本图像训练图像特征识别模型。利用训练好的模型对取景画面识别成像对象,在取景画面中,若识别到的成像对象,确定识别出成像对象的成像区域在取景画面中的相对位置。
例如,可以基于物体识别技术,识别成像对象在的成像区域在取景画面中的图像特征,而后将识别出的图像特征输入至预先训练的图像特征识别模型,即可确定成像区域在取景画面中的相对位置。
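Once the imaging region's bounding box has been identified, its relative position in the framing picture reduces to simple normalized coordinates. A minimal sketch, where the (x, y, w, h) box format is an assumption:

```python
def relative_position(box, frame_w, frame_h):
    """Return the imaging region's normalized horizontal center and top edge.

    `cx` near 0.5 means the region is horizontally centered; `top` is the
    fraction of the frame height above the region.
    """
    x, y, w, h = box
    cx = (x + w / 2) / frame_w
    top = y / frame_h
    return cx, top
```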
作为一种可能的实现方式,可以根据成像区域的高度与成像对象的实际高度之间的比例关系以及图像传感器的焦距,确定成像对象与智能设备之间的空间距离。
具体地,可以基于透视理论,确定成像对象与智能设备之间的空间距离。例如,参见图6,图6为透视理论的原理示意图。根据相似三角形AOB和COD,可以得到:
$$\frac{h}{H} = \frac{f}{d}$$
其中,底片与镜头的距离为图像传感器的焦距,标记焦距为f,成像对象的实际高度为H,成像区域的高度为h,成像对象与智能设备之间的空间距离为d,则可以得到:
$$d = \frac{f \cdot H}{h}$$
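As a worked example under assumed values (focal length $f = 1000$ pixels, actual subject height $H = 1.7\,\mathrm{m}$, imaging region height $h = 340$ pixels):

$$d = \frac{f \cdot H}{h} = \frac{1000 \times 1.7\,\mathrm{m}}{340} = 5\,\mathrm{m}$$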
作为另一种可能的实现方式,智能设备可以包括深度摄像头,可以通过深度摄像头采集成像对象对应的深度数据,而后根据深度数据确定成像对象与智能设备之间的空间距离。
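For the depth-camera variant, one robust choice is the median depth within the imaging region; the bounding-box format and meter units below are assumptions:

```python
import numpy as np


def distance_from_depth(depth_map, box):
    """Median depth (in meters) inside the subject's bounding box."""
    x, y, w, h = box
    region = depth_map[y:y + h, x:x + w]
    valid = region[region > 0]  # ignore invalid zero readings
    return float(np.median(valid)) if valid.size else float("nan")
```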
步骤503,当根据相对位置和空间距离,确定取景画面符合预设构图条件时,控制智能设备进行拍摄。
本公开实施例中,预设构图条件为预先设置的。可选地,为了保证构图质量,提升成像质量,预设构图条件可以包括:成像对象的成像区域处于取景框横向的中心。
进一步地,为了提升成像效果,预设构图条件还可以包括:成像对象的成像区域不低于取景框纵向的预设高度。其中,预设高度为预先设置的,例如预设高度可以为智能设备的内置程序预先设置的,或者预设高度可以由用户进行设置,比如预设高度可以为1/3,对此不作限制。
此外,为了避免成像对象距离智能设备太近,而导致取景画面中只有成像对象的局部区域或成像对象的成像区域过大,从而导致成像效果较差,或者,为了避免成像对象距离智能设备太远,而导致取景画面中成像对象的成像区域太小,从而导致成像效果较差,本公开实施例中,预设构图条件还可以包括:成像对象与智能设备之间的空间距离不低于预设的空间距离范围。其中,预设的空间距离范围为预先设置的,例如预设的空间距离范围可以为智能设备的内置程序预先设置的,或者预设的空间距离范围可以由用户进行设置,可选地,标记预设的空间距离范围为[a,b],例如[a,b]可以为[0.5,3]米,对此不作限 制。
本公开实施例中,当识别出成像对象的成像区域在取景画面中的相对位置以及成像对象与智能设备之间的空间距离后,可以当根据相对位置和空间距离,确定取景画面是否符合预设构图条件,当符合预设构图条件时,表明此时的构图质量较佳,因此可以控制智能设备进行拍摄,而当未符合预设构图条件时,表明此时的构图质量并未达到最优,为了避免降低成像效果以及用户体验,本公开实施例中,可以不控制智能设备进行拍摄。
本实施例的拍摄方法,通过获取智能设备采集的取景画面,而后识别成像对象的成像区域在取景画面中的相对位置以及成像对象与智能设备之间的空间距离,只有当根据相对位置和空间距离,确定取景画面符合预设构图条件时,才控制智能设备进行拍摄。本实施例中,无需用户自行调整站位以及确认预览画面是否符合预期,简化拍照过程中的操作步骤,提升用户体验,以及提高拍照效率。此外,由智能设备控制器根据成像对象在取景画面中的相对位置和与智能设备之间的空间距离,自动确定取景画面是否符合预设构图条件,只有当符合预设构图条件时,才控制智能设备进行拍摄,可以有效保障成像质量,提升成像效果。
为了清楚说明上一实施例,本公开实施例提供了另一种拍摄方法,图7为本公开实施例所提供的另一种拍摄方法的流程示意图,如图7所示,该拍摄方法可以包括以下步骤:
步骤601,获取智能设备采集的取景画面。
步骤602,识别成像对象的成像区域在取景画面中的相对位置。
步骤601~602的执行过程可以参见上述实施例中步骤501~502的执行过程,在此不做赘述。
步骤603,判断相对位置是否处于预设范围内,若是,执行步骤605,否则,执行步骤604。
一般情况下,为了获得较佳的成像效果,成像对象的成像区域在取景画面中的相对位置需处于预设范围内。需要说明的是,一般情况下,构图框位于取景框中,当构图框位于取景框中时,预设范围可以包括取景框内,或者构图框内。其中,构图框,用于指示取景画面中符合预设构图条件指示的相对位置。然而,实际应用时,可能存在构图框未完全位于取景框中的情况,此时,预设范围还可以包括取景框和构图框之间的交叠区域,或者取景框和构图框覆盖区域。
因此,本公开实施例中,可以判断相对位置是否处于预设范围内,若处于预设范围内,且未获取到拍照指令,则返回执行步骤601,若未处于预设范围内,无论是否获取到拍照指令,则触发步骤604。
步骤604,根据相对位置,驱动智能设备的底盘和云台中的至少一个转动,以使成像 对象的成像区域处于取景画面的预设范围内。
一般情况下,当成像对象的成像区域相对于预设范围的偏移量较小时,可以通过驱动智能设备的云台转动,使得成像对象的成像区域处于取景画面的预设范围内。而当成像对象的成像区域相对于预设范围的偏移量较大时,此时,可以通过驱动智能设备的底盘转动,使得成像对象的成像区域处于取景画面的预设范围内。
因此,本公开实施例中,当取景画面中,所述成像对象的成像区域超出所述预设范围达到第一偏移量时,根据所述第一偏移量,驱动所述云台转动,而当成像对象的成像区域超出所述预设范围达到第二偏移量时,根据所述第二偏移量,驱动所述底盘转动。其中,所述第二偏移量大于所述第一偏移量。
本公开实施例中,第一偏移量和第二偏移量均为预先设置的,例如第一偏移量(或者第二偏移量)可以为智能设备的内置程序预先设置的,或者第一偏移量(或者第二偏移量)可以由用户进行设置,对此不作限制。
步骤605,判断是否获取到拍照指令,若是,执行步骤606,否则,执行步骤601。
现有技术中,通过用户手动触发机器人的拍照功能,即机器人的拍照功能为被动触发式,例如优必选-克鲁泽机器人、康力优蓝-U05机器人等,拍照方式单一。
而本公开实施例中,当用户处于静止状态时,可以自动触发智能设备的拍照功能。智能设备控制器可以识别成像对象是否处于静止状态,当确定成像对象处于静止状态的情况下,可以自动生成拍照指令。
作为一种可能的实现方式,智能设备控制器可以根据最近采集到的预设个数的取景画面之间的相似性,确定成像对象处于静止状态。其中,预设个数为预先设置的,例如预设个数可以为智能设备的内置程序预先设定的,或者,预设个数也可以由用户进行设置,对此不作限制。举例而言,当预设个数为5个时,如果最近采集到的5个以上的取景画面的相似性较高,此时,则可以确定成像对象处于静止状态。
进一步地,为了提升智能设备与用户之间的互动性,在生成拍照指令时,还可以生成文字和语音提示信息中的至少一种,以提示用户做好拍照准备,例如提示信息可以为“我要拍照啦,321茄子!”。
作为另一种可能的实现方式,智能设备控制器可以识别成像对象的姿态是否符合预设姿态,其中,预设姿态可以由用户进行设置,或者,预设姿态可以为智能设备的内置程序预先设定的,对此不作限制;成像对象的姿态可以包括手势和表情中的至少一个。当确定所述成像对象的姿态符合预设姿态的情况下时,可以生成所述拍照指令。
需要说明的是,当预设姿态为手势时,预设姿态可以包括单手做出的手势和双手做出的手势。当为单手做出的手势时,为了提升智能设备控制器识别效率,可以不分左手做出 的手势还是右手做出的手势。举例而言,参见图8,当预设姿势为“比心”时,无论成像对象是通过左手做出的“比心”动作,还是右手做出的“比心”动作,均确定该成像对象的姿态符合预设姿态。而当为双手做出的手势时,成像对象需准确做出预设姿态。举例而言,参见图9,当预设姿态为“感谢”时,此时,成像对象需右手握拳,左手张开,并将左手掌心覆盖在右拳上。
进一步地,为了提升智能设备与用户之间的互动性,在生成拍照指令时,还可以生成语音提示信息,例如,提示信息可以为“这个pose(或者表情)不错哦,321茄子!”。
作为又一种可能的实现方式,还可以通过用户语音触发自动拍照,智能设备控制器可以根据所述智能设备采集到的用户语音,生成所述拍照指令。
进一步地,为了提升智能设备与用户之间的互动性,在生成拍照指令时,还可以生成语音提示信息,例如,当用户站好后,可以提示用户“拍照喽!”。
本公开实施例中,可以通过不同方式,触发智能设备自动拍照,在丰富拍照方式的基础上,有效提升用户的拍照体验。若获取到拍照指令,则可以执行步骤606,若未获取到拍照指令,则返回执行步骤601。
步骤606,判断相对位置是否符合预设构图条件,若是,执行步骤608,否则,执行步骤607。
本公开实施例中,当智能设备控制器获取到拍照指令时,可以判断相对位置是否符合预设构图条件,具体地,可以判断成像对象的成像区域是否处于所述取景框横向的中心,同时,判断成像对象的成像区域是否不低于所述取景框纵向的预设高度。只有当成像对象的成像区域处于所述取景框横向的中心,且成像对象的成像区域不低于所述取景框纵向的预设高度时,确定相对位置符合预设构图条件。
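Putting the two positional criteria of step 606 together, the judgment might look like the following sketch; the centering tolerance is an assumption, and 1/3 is only the example preset height given above:

```python
def conforms_to_composition(box, frame_w, frame_h,
                            center_tol=0.05, preset_height=1 / 3):
    """True when the imaging region is at the horizontal center of the
    viewfinder frame and not below the preset fraction of its height."""
    x, y, w, h = box
    horizontally_centered = abs((x + w / 2) / frame_w - 0.5) <= center_tol
    high_enough = y / frame_h <= preset_height
    return horizontally_centered and high_enough
```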
步骤607,根据成像对象的成像区域相对构图框的偏移量,驱动智能设备的底盘和云台中的至少一个移动,直至成像对象的成像区域处于构图框内。
具体地,当成像对象的成像区域相对构图框的偏移量较小时,可以通过驱动智能设备的云台转动,直至成像对象的成像区域处于构图框内。而当成像对象的成像区域相对构图框的偏移量较大时,此时,可以通过驱动智能设备的底盘转动,直至成像对象的成像区域处于构图框内。
作为一种可能的实现方式,当判断出相对位置不符合预设构图条件时,还可以输出语音和文字提示信息中的至少一种,由成像对象根据提示信息,移动身体,使得成像区域处于构图框内。
举例而言,当成像对象的成像区域未处于取景框横向的中心时,例如当成像对象的成像区域位于取景框的左侧时,可以语音提示:请往右侧走两步。而当成像对象的成像区域 位于取景框的右侧时,可以语音提示:请往左侧走两步。或者,当成像对象的成像区域低于取景框纵向的预设高度时,可以语音提示:抬首挺胸,站直些!或者为:请往前走两步!
在输出提示信息后,智能设备控制器可以继续识别成像对象的成像区域在取景画面中的相对位置,即重新触发步骤606及后续步骤。
步骤608,识别成像对象与智能设备之间的空间距离,判断空间距离是否符合预设构图条件,若是,执行步骤610-611,否则,执行步骤609。
需要说明的是,本公开实施例中,步骤608是在步骤606之后执行的,但是本公开并不限于此,步骤608还可以在步骤606之前执行,或者,步骤608可以和步骤606并列执行。
需要说明的是,为了避免成像对象距离智能设备太近,而导致取景画面中只有成像对象的局部区域或成像对象的成像区域过大,从而导致成像效果较差,或者,为了避免成像对象距离智能设备太远,而导致取景画面中成像对象的成像区域太小,从而导致成像效果较差,本公开实施例中,预设构图条件还可以包括:成像对象与智能设备之间的空间距离不低于预设的空间距离范围。
因此,可以判断空间距离是否符合预设构图条件,即确定成像对象与智能设备之间的空间距离是否低于预设构图条件指示的空间距离范围,若是,则触发步骤609,否则,执行步骤610。
步骤609,输出提示信息,并继续识别空间距离,直至空间距离属于预设构图条件指示的空间距离范围。
本公开实施例中,当识别出的空间距离不属于预设构图条件指示的空间距离范围时,可以输出语音和文字提示信息中的至少一种。
举例而言,标记预设构图条件指示的空间距离范围为[a,b],当成像对象距离与智能设备之间的空间距离小于a时,此时,表明成像对象距离智能设备太近,此时,可以输出语音信息:有点近了,退后一点拍照效果更佳。而当成像对象距离与智能设备之间的空间距离大于b时,此时,表明成像对象距离智能设备太远,此时,可以输出语音信息:有点远了,请往前两步。
在输出提示信息后,智能设备控制器可以继续识别空间距离,即重新触发步骤608及后续步骤。
步骤610,控制智能设备连续拍摄至少两帧图像。
本公开实施例中,当根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,表明此时的构图质量较佳,因此可以控制智能设备进行拍摄。可选地,为了进一步保障成像质量,可以控制智能设备连续拍摄至少两帧图像,从而后续可以从至少两 帧图像中选取图像质量最佳的图像进行展示。
步骤611,根据图像质量,从至少两帧图像中选取用于预览展示的图像。
可选地,可以从至少两帧图像中选取图像质量最佳的图像进行展示,从而用户可以发送或者下载图像质量最佳的图像,有效保障成像质量,保障成像效果,提升用户的拍照体验。
本实施例的拍摄方法,通过获取智能设备采集的取景画面,而后识别成像对象的成像区域在取景画面中的相对位置以及成像对象与智能设备之间的空间距离,只有当根据相对位置和空间距离,确定取景画面符合预设构图条件时,才控制智能设备进行拍摄。本实施例中,无需用户自行调整站位以及确认预览画面是否符合预期,简化拍照过程中的操作步骤,提升用户体验,以及提高拍照效率。此外,由智能设备控制器根据成像对象在取景画面中的相对位置和与智能设备之间的空间距离,自动确定取景画面是否符合预设构图条件,只有当符合预设构图条件时,才控制智能设备进行拍摄,可以有效保障成像质量,提升成像效果。
为了实现上述实施例，本公开还提出一种拍摄装置。图10为本公开实施例所提供的一种拍摄装置的结构示意图，如图10所示，该拍摄装置包括：对象识别模块10，位置识别模块20，引导模块30，拍摄模块40。
其中,对象识别模块10,用于获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象。
位置识别模块20,用于获取覆盖智能设备周边环境的第二画面,从第二画面中识别出目标拍摄位置。
引导模块30,用于控制智能设备引导成像对象进入目标拍摄位置。
拍摄模块40,用于控制智能设备为成像对象进行拍摄。
进一步地,位置识别模块20,具体用于:
提取第二画面中的每个像素点的图像特征;
根据每个像素点的图像特征,识别出图像特征满足预设的图像特征条件的至少一个第一像素点;
根据第一像素点在第二画面中的位置信息,确定第一像素点在环境中的第一位置信息,将第一位置信息作为目标拍摄位置。
进一步地,位置识别模块20,具体用于:
从第二画面中识别不存在遮挡物的目标区域;其中,目标区域的面积大于或者等于预设的面积阈值;
根据目标区域内每个像素点在第二画面中的位置信息,确定目标区域在环境中的第二 位置信息;
将第二位置信息作为目标拍摄位置。
进一步地,位置识别模块20,具体用于:
从第二画面中识别不存在遮挡物的目标区域;其中,目标区域的面积大于或者等于预设的面积阈值;
提取目标区域中的每个像素点的图像特征;
根据每个像素点的图像特征,识别出图像特征满足预设的图像特征条件的至少一个第一像素点;
根据第一像素点在第二画面中的位置信息,确定第一像素点在环境中的第一位置信息;
将第一位置信息作为目标拍摄位置。
进一步地,引导模块30,具体用于:
确定智能设备与所述目标拍摄位置的之间的位置关系;其中,位置关系包括智能设备与目标拍摄位置之间的空间距离、智能设备与目标拍摄位置之间的角度中的至少一个;
根据位置关系,控制智能设备向目标拍摄位置移动;
向成像对象发出跟随指令,引导成像对象进入目标拍摄位置。
图11为本公开实施例所提供的另一种拍摄装置的结构示意图,如图11所示,拍摄模块40还包括:获取单元41,识别单元42,拍摄单元43。
其中,获取单元41,用于获取智能设备采集的取景画面;
识别单元42,用于识别成像对象的成像区域在取景画面中的相对位置,以及识别成像对象与智能设备之间的空间距离;
拍摄单元43,用于当根据相对位置和空间距离,确定取景画面符合预设构图条件时,控制智能设备进行拍摄。
进一步地,拍摄模块40,还包括:
第一驱动单元44,用于在识别成像对象的成像区域在所述取景画面中的相对位置之后,当相对位置未处于预设范围内时,根据相对位置,驱动智能设备的底盘和云台中的至少一个转动,以使成像对象的成像区域处于取景画面的预设范围内;
其中,预设范围,包括取景框内,或者构图框内,或者取景框和构图框之间的交叠区域,或者取景框和构图框覆盖区域;其中,构图框,用于指示取景画面中符合预设构图条件指示的相对位置。
进一步地,第一驱动单元44,具体用于:
若取景画面中,成像对象的成像区域超出预设范围达到第一偏移量,根据第一偏移量,驱动云台转动;
若取景画面中,成像对象的成像区域超出预设范围达到第二偏移量,根据第二偏移量,驱动底盘转动;其中,第二偏移量大于第一偏移量。
图12为本公开实施例所提供的另一种拍摄装置的结构示意图,如图12所示,拍摄模块40还包括:判断单元45、第二驱动单元46和提示单元47。
其中,判断单元45,用于当获取到拍照指令时,根据相对位置和空间距离,确定取景画面符合预设构图条件时,控制智能设备进行拍摄之前,当获取到拍照指令时,根据相对位置和空间距离,判断取景画面是否符合预设构图条件。
第二驱动单元46,用于若判断出相对位置不符合预设构图条件,根据成像对象的成像区域相对构图框的偏移量,驱动智能设备的底盘和云台中的至少一个移动,直至成像对象的成像区域处于构图框内。
提示单元47,用于若判断出空间距离不符合预设构图条件,输出提示信息,并返回识别单元继续识别空间距离,直至空间距离属于预设构图条件指示的空间距离范围。
进一步地,拍摄模块40,还包括:指令生成单元48。
作为一种可能的实现方式,指令生成单元48,用于根据最近采集到的预设个数的取景画面之间的相似性,确定成像对象处于静止状态的情况下,生成拍照指令。
作为一种可能的实现方式,指令生成单元48,用于确定成像对象的姿态符合预设姿态的情况下,生成拍照指令,姿态包括手势和表情中的至少一个。
作为一种可能的实现方式,指令生成单元48,用于根据智能设备采集到的用户语音,生成拍照指令。
需要说明的是,前述实施例对拍摄方法的解释说明同样适用于本实施例的拍摄装置,此处不再赘述。
本公开实施例的拍摄装置,通过获取智能设备视野范围内的第一画面,对第一画面进行焦点识别,确定成像对象,进而获取覆盖智能设备周边环境的第二画面,从第二画面中识别出目标拍摄位置,进一步控制智能设备引导成像对象进入目标拍摄位置,并控制智能设备为成像对象进行拍摄。由此,在智能的确定成像对象以及目标拍摄位置之后,控制智能设备引导成像对象进入目标拍摄位置,以进一步对成像对象进行拍摄,使用户不需要手动调整拍摄位置,解决了手动拍摄操作繁琐的问题,实现了智能的选择最佳拍摄位置对成像对象进行拍摄,提高了成像效果,简单高效,并且拍摄模式灵活,提升了用户体验。
为了实现上述实施例,本公开还提出一种智能设备,
图13为本公开实施例所提供的智能设备的结构示意图。
如图13所示,该智能设备包括:存储器701、处理器702及存储在存储器701上并可在处理器702上运行的计算机程序,所述处理器702执行所述程序时,实现如本公开前述 任一实施例提所述的拍摄方法。
为了实现上述实施例,本公开还提出一种计算机程序产品,当计算机程序产品中的指令处理器执行时实现如前述任一实施例所述的拍摄方法。
为了实现上述实施例,本公开还提出一种非临时性计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如前述任一实施例所述的拍摄方法。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本公开的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本公开的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现定制逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本公开的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本公开的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行 编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本公开的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。如,如果用硬件来实现和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本公开各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本公开的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本公开的限制,本领域的普通技术人员在本公开的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (28)

  1. 一种拍摄方法,其特征在于,所述方法包括以下步骤:
    获取智能设备视野范围内的第一画面,对所述第一画面进行焦点识别,确定成像对象;
    获取覆盖智能设备周边环境的第二画面,从所述第二画面中识别出目标拍摄位置;
    控制所述智能设备引导所述成像对象进入所述目标拍摄位置;
    控制所述智能设备为所述成像对象进行拍摄。
  2. 根据权利要求1所述的拍摄方法,其特征在于,所述从所述第二画面中识别目标拍摄位置,包括:
    提取所述第二画面中的每个像素点的图像特征;
    根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;
    根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息,将所述第一位置信息作为所述目标拍摄位置。
  3. 根据权利要求1所述的拍摄方法,其特征在于,所述从所述第二画面中识别目标拍摄位置,包括:
    从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;
    根据所述目标区域内每个像素点在所述第二画面中的位置信息,确定所述目标区域在环境中的第二位置信息;
    将所述第二位置信息作为所述目标拍摄位置。
  4. 根据权利要求1所述的拍摄方法,其特征在于,所述从所述第二画面中识别目标拍摄位置,包括:
    从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;
    提取所述目标区域中的每个像素点的图像特征;
    根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;
    根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息;
    将所述第一位置信息作为所述目标拍摄位置。
  5. 根据权利要求1-4任一项所述的拍摄方法,其特征在于,所述控制所述智能设备引 导所述拍摄对象进入所述目标拍摄位置,包括:
    确定所述智能设备与所述目标拍摄位置的之间的位置关系;其中,所述位置关系包括所述智能设备与所述目标拍摄位置之间的空间距离、所述智能设备与所述目标拍摄位置之间的角度中的至少一个;
    根据所述位置关系,控制所述智能设备向所述目标拍摄位置移动;
    向所述成像对象发出跟随指令,引导所述成像对象进入所述目标拍摄位置。
  6. 根据权利要求1-5任一项所述的拍摄方法,其特征在于,所述控制所述智能设备为所述成像对象进行拍摄,包括:
    获取所述智能设备采集的取景画面;
    识别成像对象的成像区域在所述取景画面中的相对位置,以及识别所述成像对象与所述智能设备之间的空间距离;
    当根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄。
  7. 根据权利要求6所述的拍摄方法,其特征在于,所述识别成像对象的成像区域在所述取景画面中的相对位置之后,还包括:
    当所述相对位置未处于预设范围内时,根据所述相对位置,驱动所述智能设备的底盘和云台中的至少一个转动,以使所述成像对象的成像区域处于所述取景画面的预设范围内;
    其中,所述预设范围,包括取景框内,或者构图框内,或者所述取景框和所述构图框之间的交叠区域,或者所述取景框和所述构图框覆盖区域;其中,所述构图框,用于指示所述取景画面中符合所述预设构图条件指示的相对位置。
  8. 根据权利要求7所述的拍摄方法,其特征在于,所述驱动所述智能设备的底盘和云台中的至少一个移动,包括:
    若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第一偏移量,根据所述第一偏移量,驱动所述云台转动;
    若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第二偏移量,根据所述第二偏移量,驱动所述底盘转动;其中,所述第二偏移量大于所述第一偏移量。
  9. 根据权利要求7或8所述的拍摄方法,其特征在于,所述预设构图条件指示的相对位置,包括:
    所述成像对象的成像区域处于所述取景框横向的中心;
    且,所述成像对象的成像区域不低于所述取景框纵向的预设高度。
  10. 根据权利要求6-9任一项所述的拍摄方法,其特征在于,所述当根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄 之前,还包括:
    当获取到拍照指令时,根据所述相对位置和所述空间距离,判断所述取景画面是否符合预设构图条件;
    若判断出所述相对位置不符合所述预设构图条件,根据所述成像对象的成像区域相对构图框的偏移量,驱动所述智能设备的底盘和云台中的至少一个移动,直至所述成像对象的成像区域处于所述构图框内;
    若判断出所述空间距离不符合所述预设构图条件,输出提示信息,并继续识别所述空间距离,直至所述空间距离属于所述预设构图条件指示的空间距离范围。
  11. 根据权利要求10所述的拍摄方法,其特征在于,所述获取到拍照指令,包括:
    根据最近采集到的预设个数的取景画面之间的相似性,确定所述成像对象处于静止状态的情况下,生成所述拍照指令。
  12. 根据权利要求6-11任一项所述的拍摄方法,其特征在于,所述识别所述成像对象与所述智能设备之间的空间距离,包括:
    根据所述成像区域的高度与所述成像对象的实际高度之间的比例关系以及图像传感器的焦距,确定所述成像对象与所述智能设备之间的空间距离;其中,所述图像传感器用于所述智能设备采集所述取景画面;
    或者,根据所述智能设备的深度摄像头采集到的深度数据,确定所述成像对象与所述智能设备之间的空间距离。
  13. 根据权利要求6-12任一项所述的拍摄方法,其特征在于,所述控制所述智能设备进行拍摄,包括:
    控制所述智能设备连续拍摄至少两帧图像;
    所述控制所述智能设备进行拍摄之后,还包括:
    根据图像质量,从所述至少两帧图像中选取用于预览展示的图像。
  14. 一种拍摄装置,其特征在于,包括:
    对象识别模块,用于获取智能设备视野范围内的第一画面,对所述第一画面进行焦点识别,确定成像对象;
    位置识别模块,用于获取覆盖智能设备周边环境的第二画面,从所述第二画面中识别出目标拍摄位置;
    引导模块,用于控制所述智能设备引导所述成像对象进入所述目标拍摄位置;
    拍摄模块,用于控制所述智能设备为所述成像对象进行拍摄。
  15. 根据权利要求14所述的拍摄装置,其特征在于,所述位置识别模块,具体用于:
    提取所述第二画面中的每个像素点的图像特征;
    根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;
    根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息,将所述第一位置信息作为所述目标拍摄位置。
  16. 根据权利要求14所述的拍摄装置,其特征在于,所述位置识别模块,具体用于:
    从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;
    根据所述目标区域内每个像素点在所述第二画面中的位置信息,确定所述目标区域在环境中的第二位置信息;
    将所述第二位置信息作为所述目标拍摄位置。
  17. 根据权利要求14所述的拍摄装置,其特征在于,所述位置识别模块,具体用于:
    从所述第二画面中识别不存在遮挡物的目标区域;其中,所述目标区域的面积大于或者等于预设的面积阈值;
    提取所述目标区域中的每个像素点的图像特征;
    根据每个像素点的图像特征,识别出所述图像特征满足预设的图像特征条件的至少一个第一像素点;
    根据所述第一像素点在所述第二画面中的位置信息,确定所述第一像素点在环境中的第一位置信息;
    将所述第一位置信息作为所述目标拍摄位置。
  18. 根据权利要求14-17任一项所述的拍摄装置,其特征在于,所述引导模块,具体用于:
    确定所述智能设备与所述目标拍摄位置的之间的位置关系;其中,所述位置关系包括所述智能设备与所述目标拍摄位置之间的空间距离、所述智能设备与所述目标拍摄位置之间的角度中的至少一个;
    根据所述位置关系,控制所述智能设备向所述目标拍摄位置移动;
    向所述成像对象发出跟随指令,引导所述成像对象进入所述目标拍摄位置。
  19. 根据权利要求14-18任一项所述的拍摄装置,其特征在于,所述拍摄模块,包括:
    获取单元,用于获取所述智能设备采集的取景画面;
    识别单元,用于识别成像对象的成像区域在所述取景画面中的相对位置,以及识别所述成像对象与所述智能设备之间的空间距离;
    拍摄单元,用于当根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄。
  20. 根据权利要求19所述的拍摄装置,其特征在于,所述拍摄模块,还包括:
    第一驱动单元,用于在识别成像对象的成像区域在所述取景画面中的相对位置之后,当所述相对位置未处于预设范围内时,根据所述相对位置,驱动所述智能设备的底盘和云台中的至少一个转动,以使所述成像对象的成像区域处于所述取景画面的预设范围内;
    其中,所述预设范围,包括取景框内,或者构图框内,或者所述取景框和所述构图框之间的交叠区域,或者所述取景框和所述构图框覆盖区域;其中,所述构图框,用于指示所述取景画面中符合所述预设构图条件指示的相对位置。
  21. 根据权利要求20所述的拍摄装置,其特征在于,所述第一驱动单元,具体用于:
    若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第一偏移量,根据所述第一偏移量,驱动所述云台转动;
    若所述取景画面中,所述成像对象的成像区域超出所述预设范围达到第二偏移量,根据所述第二偏移量,驱动所述底盘转动;其中,所述第二偏移量大于所述第一偏移量。
  22. 根据权利要求20或21所述的拍摄装置,其特征在于,所述预设构图条件指示的相对位置,包括:
    所述成像对象的成像区域处于所述取景框横向的中心;
    且,所述成像对象的成像区域不低于所述取景框纵向的预设高度。
  23. 根据权利要求19-22任一项所述的拍摄装置,其特征在于,所述拍摄模块,还包括:判断单元、第二驱动单元和提示单元;
    所述判断单元,用于当获取到拍照指令时,根据所述相对位置和所述空间距离,确定所述取景画面符合预设构图条件时,控制所述智能设备进行拍摄之前,当获取到拍照指令时,根据所述相对位置和所述空间距离,判断所述取景画面是否符合预设构图条件;
    第二驱动单元,用于若判断出所述相对位置不符合所述预设构图条件,根据所述成像对象的成像区域相对构图框的偏移量,驱动所述智能设备的底盘和云台中的至少一个移动,直至所述成像对象的成像区域处于所述构图框内;
    提示单元,用于若判断出所述空间距离不符合所述预设构图条件,输出提示信息,并返回所述识别单元继续识别所述空间距离,直至所述空间距离属于所述预设构图条件指示的空间距离范围。
  24. 根据权利要求23所述的拍摄装置,其特征在于,所述拍摄模块,还包括:指令生成单元;
    所述指令生成单元,用于根据最近采集到的预设个数的取景画面之间的相似性,确定所述成像对象处于静止状态的情况下,生成所述拍照指令。
  25. 根据权利要求19-24任一项所述的拍摄装置,其特征在于,所述识别单元,具体 用于:
    根据所述成像区域的高度与所述成像对象的实际高度之间的比例关系以及图像传感器的焦距,确定所述成像对象与所述智能设备之间的空间距离;其中,所述图像传感器用于所述智能设备采集所述取景画面;
    或者,根据所述智能设备的深度摄像头采集到的深度数据,确定所述成像对象与所述智能设备之间的空间距离。
  26. 根据权利要求19-25任一项所述的拍摄装置,其特征在于,所述拍摄单元,具体用于:
    控制所述智能设备连续拍摄至少两帧图像;
    所述控制所述智能设备进行拍摄之后,还包括:
    根据图像质量,从所述至少两帧图像中选取用于预览展示的图像。
  27. 一种智能设备,其特征在于,包括:存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如权利要求1-13中任一所述的拍摄方法。
  28. 一种非临时性计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现如权利要求1-13任一所述的拍摄方法。
PCT/CN2019/078258 2018-03-21 2019-03-15 拍摄方法、装置、智能设备及存储介质 WO2019179357A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810236367.1A CN108737717A (zh) 2018-03-21 2018-03-21 拍摄方法、装置、智能设备及存储介质
CN201810236367.1 2018-03-21

Publications (1)

Publication Number Publication Date
WO2019179357A1 true WO2019179357A1 (zh) 2019-09-26

Family

ID=63941004

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078258 WO2019179357A1 (zh) 2018-03-21 2019-03-15 拍摄方法、装置、智能设备及存储介质

Country Status (3)

Country Link
CN (1) CN108737717A (zh)
TW (1) TWI697720B (zh)
WO (1) WO2019179357A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941987A (zh) * 2019-10-10 2020-03-31 北京百度网讯科技有限公司 目标对象识别方法、装置、电子设备及存储介质
CN112929567A (zh) * 2021-01-27 2021-06-08 咪咕音乐有限公司 拍摄位置的确定方法、电子设备和存储介质
CN114737358A (zh) * 2022-03-31 2022-07-12 无锡小天鹅电器有限公司 衣物处理设备及其控制方法、联动控制系统及存储介质
CN115835005A (zh) * 2022-10-31 2023-03-21 泰康保险集团股份有限公司 引导用户拍摄的方法及装置、电子设备

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737717A (zh) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 拍摄方法、装置、智能设备及存储介质
CN109506108B (zh) * 2018-12-04 2024-05-28 南京乐拍时代智能科技有限公司 活动平台、自拍方法以及自拍系统
CN110516630A (zh) * 2019-08-30 2019-11-29 广东智媒云图科技股份有限公司 一种led显示屏作画方法、装置、设备及存储介质
CN112154656B (zh) * 2019-09-25 2022-10-11 深圳市大疆创新科技有限公司 一种拍摄方法和拍摄设备
CN112770044A (zh) * 2019-11-06 2021-05-07 北京沃东天骏信息技术有限公司 自拍图像的方法和装置
CN112807698B (zh) * 2020-12-31 2023-05-30 上海米哈游天命科技有限公司 拍摄位置的确定方法、装置、电子设备及存储介质
CN114727006A (zh) * 2021-01-06 2022-07-08 北京小米移动软件有限公司 图像拍摄方法和装置
TWI760189B (zh) * 2021-04-19 2022-04-01 微星科技股份有限公司 可移動式電子裝置及其控制方法
CN113743211B (zh) * 2021-08-02 2023-10-31 日立楼宇技术(广州)有限公司 一种扶梯视频监控系统、方法、装置及存储介质
CN113792580B (zh) * 2021-08-02 2023-11-03 日立楼宇技术(广州)有限公司 一种自动扶梯的辅助拍摄系统、方法、装置及存储介质
CN113824874A (zh) * 2021-08-05 2021-12-21 宇龙计算机通信科技(深圳)有限公司 辅助摄像方法、装置、电子设备及存储介质
CN117500120B (zh) * 2023-12-29 2024-03-15 深圳市正远科技有限公司 一种感应式led照明方法、系统以及智能库房

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2275864A1 (en) * 2009-07-08 2011-01-19 Sony Ericsson Mobile Communications Japan, Inc. Photographing device and photographing control method
CN104883497A (zh) * 2015-04-30 2015-09-02 广东欧珀移动通信有限公司 一种定位拍摄方法及移动终端
CN105007418A (zh) * 2015-07-03 2015-10-28 广东欧珀移动通信有限公司 一种拍照方法及移动终端
CN105516609A (zh) * 2016-01-29 2016-04-20 广东欧珀移动通信有限公司 拍照方法及装置
CN105827933A (zh) * 2015-06-29 2016-08-03 维沃移动通信有限公司 一种摄像方法、装置以及移动终端
CN106303195A (zh) * 2015-05-28 2017-01-04 中兴通讯股份有限公司 拍摄设备及跟踪拍摄方法和系统
CN107438155A (zh) * 2016-05-27 2017-12-05 杨仲辉 智能图像拍摄方法
CN107509032A (zh) * 2017-09-08 2017-12-22 维沃移动通信有限公司 一种拍照提示方法及移动终端
CN107749952A (zh) * 2017-11-09 2018-03-02 睿魔智能科技(东莞)有限公司 一种基于深度学习的智能无人摄影方法和系统
CN108737717A (zh) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 拍摄方法、装置、智能设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100608596B1 (ko) * 2004-12-28 2006-08-03 삼성전자주식회사 얼굴 검출을 기반으로 하는 휴대용 영상 촬영 기기 및영상 촬영 방법
JP4779041B2 (ja) * 2009-11-26 2011-09-21 株式会社日立製作所 画像撮影システム、画像撮影方法、および画像撮影プログラム
KR20130094113A (ko) * 2012-02-15 2013-08-23 삼성전자주식회사 카메라 데이터 처리 장치 및 방법
JP2016510522A (ja) * 2012-12-28 2016-04-07 ヌビア テクノロジー カンパニー リミテッド 撮像装置及び撮像方法
CN104902172A (zh) * 2015-05-19 2015-09-09 广东欧珀移动通信有限公司 一种拍摄位置的确定方法及拍摄终端

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2275864A1 (en) * 2009-07-08 2011-01-19 Sony Ericsson Mobile Communications Japan, Inc. Photographing device and photographing control method
CN104883497A (zh) * 2015-04-30 2015-09-02 广东欧珀移动通信有限公司 一种定位拍摄方法及移动终端
CN106303195A (zh) * 2015-05-28 2017-01-04 中兴通讯股份有限公司 拍摄设备及跟踪拍摄方法和系统
CN105827933A (zh) * 2015-06-29 2016-08-03 维沃移动通信有限公司 一种摄像方法、装置以及移动终端
CN105007418A (zh) * 2015-07-03 2015-10-28 广东欧珀移动通信有限公司 一种拍照方法及移动终端
CN105516609A (zh) * 2016-01-29 2016-04-20 广东欧珀移动通信有限公司 拍照方法及装置
CN107438155A (zh) * 2016-05-27 2017-12-05 杨仲辉 智能图像拍摄方法
CN107509032A (zh) * 2017-09-08 2017-12-22 维沃移动通信有限公司 一种拍照提示方法及移动终端
CN107749952A (zh) * 2017-11-09 2018-03-02 睿魔智能科技(东莞)有限公司 一种基于深度学习的智能无人摄影方法和系统
CN108737717A (zh) * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 拍摄方法、装置、智能设备及存储介质

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941987A (zh) * 2019-10-10 2020-03-31 北京百度网讯科技有限公司 目标对象识别方法、装置、电子设备及存储介质
CN110941987B (zh) * 2019-10-10 2023-04-07 北京百度网讯科技有限公司 目标对象识别方法、装置、电子设备及存储介质
CN112929567A (zh) * 2021-01-27 2021-06-08 咪咕音乐有限公司 拍摄位置的确定方法、电子设备和存储介质
CN112929567B (zh) * 2021-01-27 2023-04-28 咪咕音乐有限公司 拍摄位置的确定方法、电子设备和存储介质
CN114737358A (zh) * 2022-03-31 2022-07-12 无锡小天鹅电器有限公司 衣物处理设备及其控制方法、联动控制系统及存储介质
CN114737358B (zh) * 2022-03-31 2023-11-03 无锡小天鹅电器有限公司 衣物处理设备及其控制方法、联动控制系统及存储介质
CN115835005A (zh) * 2022-10-31 2023-03-21 泰康保险集团股份有限公司 引导用户拍摄的方法及装置、电子设备

Also Published As

Publication number Publication date
CN108737717A (zh) 2018-11-02
TW201940953A (zh) 2019-10-16
TWI697720B (zh) 2020-07-01

Similar Documents

Publication Publication Date Title
WO2019179357A1 (zh) 拍摄方法、装置、智能设备及存储介质
WO2019179364A1 (zh) 拍摄方法、装置和智能设备
US10165199B2 (en) Image capturing apparatus for photographing object according to 3D virtual object
CN105554411B (zh) 一种基于屏幕补光的拍照方法、装置及移动终端
JP4196714B2 (ja) デジタルカメラ
TWI399084B (zh) 圖像捕捉方法和數位相機
CN103929596A (zh) 引导拍摄构图的方法及装置
US11210796B2 (en) Imaging method and imaging control apparatus
KR102407190B1 (ko) 영상 촬영 장치 및 그 동작 방법
US20050212913A1 (en) Method and arrangement for recording regions of interest of moving objects
US20050088542A1 (en) System and method for displaying an image composition template
JP2006211139A (ja) 撮像装置
WO2019104681A1 (zh) 拍摄方法和装置
CN109451240B (zh) 对焦方法、装置、计算机设备和可读存储介质
CN208459748U (zh) 一种摄影棚
JP2004320285A (ja) デジタルカメラ
CN114641983A (zh) 用于获得智能全景图像的系统及方法
WO2019084756A1 (zh) 一种图像处理方法、装置及飞行器
JP2013183185A (ja) 撮像装置、撮影制御方法及びプログラム
KR101094648B1 (ko) 구도결정을 하는 사진사 로봇 및 그 제어방법
KR20150014226A (ko) 전자 장치 및 전자 장치의 이미지 촬영 방법
CN106922181A (zh) 方向感知自动聚焦
JP2009290255A (ja) 撮像装置、および撮像装置制御方法、並びにコンピュータ・プログラム
TWI485505B (zh) 數位相機及數位相機之影像擷取方法
JP2019186791A (ja) 撮像装置、撮像装置の制御方法、および制御プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19770814

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19770814

Country of ref document: EP

Kind code of ref document: A1