CN117714882A - Robot shooting method, device, electronic equipment and medium - Google Patents

Robot shooting method, device, electronic equipment and medium

Info

Publication number
CN117714882A
CN117714882A (application CN202211086156.7A)
Authority
CN
China
Prior art keywords
information
robot
shooting
shot object
determining
Prior art date
Legal status
Pending
Application number
CN202211086156.7A
Other languages
Chinese (zh)
Inventor
李东方
Current Assignee
Beijing Xiaomi Robot Technology Co ltd
Original Assignee
Beijing Xiaomi Robot Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Robot Technology Co ltd filed Critical Beijing Xiaomi Robot Technology Co ltd
Priority to CN202211086156.7A
Publication of CN117714882A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The invention provides a robot shooting method, an apparatus, an electronic device, and a medium, and relates to the technical field of robots.

Description

Robot shooting method, device, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of robots, and in particular relates to a robot shooting method, a device, electronic equipment and a medium.
Background
With the development of robotics and photography technology, robots are increasingly used for photography. Currently, when shooting with a robot, the user typically selects a shooting position manually, and the robot shoots at the selected position to obtain a high-quality image. Because current robots cannot select a shooting position by themselves, using a robot for photographing remains inconvenient in many ways.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a robot shooting method, a device, an electronic apparatus, and a medium.
According to a first aspect of embodiments of the present disclosure, there is provided a robot shooting method, the method including: acquiring information of a shot object; determining a shooting position according to the information of the shot object; and controlling the robot to move to the shooting position to shoot.
Optionally, the information of the subject includes: at least one of sound information, face information, body information, and position information of the subject.
Optionally, the acquiring information of the subject includes: acquiring a depth image of the shot object acquired by a robot; and acquiring the position information of the shot object according to the depth image.
Optionally, the determining a shooting position according to the information of the shot object includes: determining an initial position according to the information of the shot object; when the shot object is located outdoors, acquiring current time information, geographic information of the shot object, and the orientation of a camera of the robot; determining a sunlight irradiation direction according to the time information and the geographic information; and when backlit shooting of the camera is determined according to the sunlight irradiation direction and the orientation of the camera, adjusting the initial position to obtain the shooting position.
Optionally, the determining a shooting position according to the information of the shot object includes: and determining the shooting position according to the information of the shot object and preset shooting information, wherein the preset shooting information comprises a preset distance between a camera of the robot and the shot object and an angle formed by a connecting line between the camera and the shot object and a horizontal plane.
Optionally, the determining a shooting position according to the information of the shot object includes: acquiring input imaging requirement information; and determining the shooting position according to the information of the shot object and the imaging requirement information.
Optionally, the controlling the robot to move to the shooting position to shoot includes: acquiring the position of the robot, and planning a moving path according to the position of the robot and the shooting position; and when an obstacle exists on the moving path, bypassing the obstacle and moving to the shooting position to shoot.
Optionally, the controlling the robot to move to the shooting position to shoot includes: determining environment information of the shot object; acquiring a photographing mode corresponding to the environment information; and controlling the robot to move to the shooting position, and shooting according to the shooting mode.
Optionally, the method further comprises: and deleting the shot image when the shot image does not meet the preset condition, and controlling the robot to shoot again.
According to a second aspect of embodiments of the present disclosure, there is provided a robot shooting apparatus, the apparatus including: the acquisition module is used for acquiring the information of the shot object; a determining module, configured to determine a shooting position according to information of the subject; and the shooting module is used for controlling the robot to move to the shooting position to shoot.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising: a storage device having at least one computer program stored thereon; at least one processing means for executing said at least one computer program in said storage means to carry out the steps of the method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of the first aspect.
According to the robot shooting method, apparatus, electronic device, and medium of the present disclosure, information of a subject is first acquired, a shooting position is automatically determined according to that information, and the robot is controlled to move to the shooting position and then shoot the subject. Because the robot selects the shooting position automatically based on the subject's information, an image of better quality can be captured at the selected position without the user manually selecting or setting a shooting position, which makes shooting with the robot more convenient.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a robot photographing method according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating sub-steps of step S120 of FIG. 1, according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating sub-steps of step S120 of FIG. 1, according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating sub-steps of step S130 of FIG. 1, according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating sub-steps of step S130 of FIG. 1, according to an exemplary embodiment;
FIG. 6 is a block diagram of a robot shooting apparatus according to an exemplary embodiment;
fig. 7 is a block diagram of an electronic device for a robot photographing method according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Current robots cannot automatically select a shooting position, so shooting with a robot still requires the user to choose the position manually, which makes the current shooting workflow inconvenient in many ways.
To solve the above problems, the present disclosure provides a robot shooting method. The method may be applied to a robot, to a server connected to the robot, to the robot shooting apparatus 100 shown in fig. 6, or to the electronic device 800 shown in fig. 7; this embodiment takes a robot (or a controller of the robot) as an example. The robot may be an aerial robot (e.g., an unmanned aerial vehicle), an underwater robot, or a ground robot (e.g., a wheeled robot, a legged robot, a robotic-arm robot, or another companion robot). Referring to fig. 1, the robot shooting method may include the following steps:
step S110, information of the subject is acquired.
The subject can be understood as the object that the robot needs to photograph. In one embodiment, the user sets the subject in advance. For example, a camera of the robot captures a picture, which is displayed on a display screen; the screen may be on the robot itself or on a mobile terminal communicatively connected to the robot. The user taps an object in the picture, a selection box is drawn around the tapped object, and the user confirms the selection. The object inside the confirmed selection box is the subject.
In another embodiment, the robot may select the subject automatically. When a user wants to photograph a landscape, an animal, or other people, the user can hold a camera, a mobile terminal, or other image-capturing equipment and shoot directly. However, when a user wants to photograph himself or herself, especially for a full-body picture, doing so is often inconvenient, so the robot of the present disclosure can be used instead. The robot determines the subject by reading images in its own storage, or images on a connected mobile terminal, and performing face recognition on those containing faces. For example, the face that appears most frequently among the recognized faces is likely the owner of the robot or the mobile terminal, and that owner is determined to be the subject.
After the subject is determined, for example when the user touches the robot's display screen to start a photographing application or via a terminal connected to the robot, the robot acquires information corresponding to the subject. The information of the subject may include at least one of the following, or any combination thereof: sound information, position information, face information, body information, time information, and environmental information (e.g., weather, light).
For example, when the information of the subject includes sound information and the subject is a person, the robot performs voice interaction with the person, and the robot's sound pickup apparatus collects the person's sound during the interaction. Alternatively, the robot may issue a voice query and then collect the surrounding sound; the sound that answers the query is taken as the subject's sound information. For example, the robot may ask "Are you ready?" and then collect sounds including "What shall we have for dinner tonight?" and "Ready, please take a picture of me." The utterance "Ready, please take a picture of me" answers the query and is treated as the subject's sound information, while the rest is interference from passers-by.
When the information of the subject includes position information, a depth image of the subject collected by the robot is first acquired, and the position information of the subject is then derived from the depth image. The position information can be understood as the coordinate position of the subject.
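As an illustration of how position information could be derived from a depth image, the sketch below back-projects a single depth pixel into a 3D point in the camera frame. The pinhole-camera intrinsics (`fx`, `fy`, `cx`, `cy`) and the function name are assumptions for illustration; the text does not specify a camera model.

```python
def pixel_to_position(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth depth_m (meters) into a
    3D camera-frame point, assuming a pinhole model with the given intrinsics."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

A pixel at the principal point maps to a point straight ahead of the camera at the measured depth; off-center pixels map proportionally to the side.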
When the information of the subject includes face information, the robot can shoot while moving and analyze the captured pictures to obtain the face information.
When the information of the subject includes body information, the robot scans the body with an infrared camera, or a camera on the robot captures pictures and the body information is derived from them.
When the information of the subject includes time information, the robot obtains the time, either from a connected server or from its local clock, as the time information corresponding to the subject.
When the information of the subject includes environmental information, the robot may capture a picture through its camera and derive the environmental information from it, or acquire the environmental information sent by a server.
Step S120, determining a shooting position according to the information of the shot object.
The robot automatically selects a shooting position according to the information of the subject. The shooting position can be understood as a position from which a better image can be obtained: at that position the robot can capture the front face of the subject, and the distance between the shooting position and the subject allows the whole subject to be captured or yields a better composition.
Illustratively, the information of the subject includes sound information, face information, body information, and position information. While interacting with the subject, the robot collects the subject's sound and determines the subject's bearing from it (e.g., southeast or northwest). The robot then moves along that bearing while collecting body information, from which it determines the approximate position of the subject. Next it controls the camera to capture images, moves according to the body part shown in the images (for example, the back), and continues shooting to acquire face information. Once the face information is acquired, a depth image is captured, from which a more accurate position of the subject is obtained. The shooting position is then determined comprehensively from the sound, face, body, and position information of the subject.
And step S130, controlling the robot to move to the shooting position to shoot.
After the robot is controlled to move to the shooting position, the camera on the robot is controlled to shoot the subject there, obtaining an image of better quality; the image may be a photograph or a video frame. The camera may be the robot's own camera, or a separate device such as the camera of a mobile terminal or a standalone camera, in which case the camera-equipped device is clamped to the robot by clamping members on the robot.
In one embodiment, when the subject is a person (the user), the robot may perform voice interaction with the subject and shoot after receiving a shooting instruction, i.e., voice information that instructs shooting to start, for example, "start shooting".
In another embodiment, the robot's camera captures a picture containing the subject, and shoots when the subject exhibits a preset state (e.g., a scissor-hand gesture or a smile) and is at a preset position (e.g., the middle of the picture). Alternatively, the robot shoots when the subject's pose in the picture changes. If the subject is not at the preset position, for example at the upper-left, lower-left, upper-right, or lower-right corner of the picture, where the subject cannot be fully captured, the camera is adjusted so that the subject is at the preset position in the captured picture.
This embodiment provides a robot shooting method: information of a subject is first acquired, a shooting position is automatically determined from that information, and the robot is controlled to move to the shooting position and shoot the subject. Because the robot selects the shooting position automatically according to the subject's information, a better image can be captured at the selected position without the user manually selecting or setting a shooting position, making shooting with the robot more convenient.
In one embodiment, referring to fig. 2, step S120 includes the following sub-steps:
Sub-step S121, determining an initial position according to the information of the subject.
The initial position is determined from the information of the subject. It can be understood as a preferable shooting position in the current state: at the initial position, the robot can capture the front face of the subject.
Sub-step S122, when the subject is located outdoors, acquiring current time information, geographic information of the subject, and the orientation of a camera of the robot.
The camera on the robot collects a picture of the environment and recognizes it, producing a recognition result that characterizes whether the subject is outdoors or indoors. When the subject is indoors, the indoor light source is generally unaffected by factors such as season and location, so the camera shoots the subject directly at the shooting position to obtain the captured image.
When the subject is outdoors, the quality of the captured image may be affected by sunlight; for example, a subject shot against the light may appear unclear. To determine whether the robot would be shooting backlit at the initial position, the current time information and the geographic information of the subject need to be acquired. The time information may be coarse (morning, afternoon, evening) or specific (e.g., 12:00, 14:00). The geographic information may be the subject's longitude and latitude, or simply the hemisphere (southern or northern). The orientation of the robot's camera is also acquired, for example southwest or northwest; it may be obtained from a geomagnetic sensor.
Substep S123, determining a sunlight irradiation direction according to the time information and the geographic information.
A correspondence between time information, geographic information, and sunlight irradiation direction is established in advance. Based on this correspondence, the sunlight irradiation direction corresponding to the current time information and geographic information is determined.
For example, if the time information is 14:00 and the geographic information is the northern hemisphere, the sunlight irradiation direction obtained from the correspondence for 14:00 in the northern hemisphere is northwest.
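The correspondence described above can be sketched as a simple lookup table. The entries below are illustrative only (the 14:00 / northern-hemisphere entry mirrors the example in the text); a real implementation would use a solar-position calculation or a much denser table.

```python
# Hypothetical (time bucket, hemisphere) -> sunlight irradiation direction table.
SUN_DIRECTION = {
    ("morning", "northern"): "southeast",  # illustrative entry
    ("14:00", "northern"): "northwest",    # mirrors the example in the text
}

def sunlight_direction(time_info, geo_info):
    """Look up the sunlight irradiation direction for given time and geography;
    returns None when no entry has been established."""
    return SUN_DIRECTION.get((time_info, geo_info))
```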
When non-backlit shooting (i.e., front-lit or side-lit shooting) is determined from the sunlight irradiation direction and the camera orientation, the captured image can show details more completely and the whole picture is clear and bright. In that case the initial position is determined as the shooting position, and the robot's camera is controlled to shoot there, so an image of higher quality can be obtained.
Sub-step S124, when backlit shooting of the camera is determined according to the sunlight irradiation direction and the orientation of the camera, adjusting the initial position to obtain the shooting position.
When backlit shooting is determined from the sunlight irradiation direction and the camera orientation, an image shot at the initial position would be unclear and lack detail. The initially determined position therefore needs to be adjusted to a non-backlit position, which becomes the shooting position. It can be appreciated that an image of higher quality can be obtained when the robot's camera shoots at the shooting position.
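The backlit-or-not decision from sun direction and camera orientation could be expressed as an angular comparison, as in this sketch. Bearings are compass azimuths in degrees, and the 60° threshold is an assumption, not a value from the text.

```python
def is_backlit(sun_bearing_deg, camera_bearing_deg, threshold_deg=60.0):
    """Treat the shot as backlit when the camera points roughly toward the sun,
    i.e. the smallest angle between the two compass bearings is small."""
    diff = abs((sun_bearing_deg - camera_bearing_deg + 180.0) % 360.0 - 180.0)
    return diff <= threshold_deg
```

The modulo arithmetic handles wrap-around, so bearings of 350° and 10° are correctly treated as 20° apart.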
In one embodiment, the initial position may be adjusted as follows. The initial position is determined from the information of the subject, and its distance from the subject may be a preferable shooting distance at which the whole subject (for example, the whole body) can be captured. A circle is therefore drawn with the subject as the center and that distance as the radius, and the robot moves from the initial position along the arc by a preset (relatively small) radian to reach the shooting position. For example, suppose the initial position faces the subject directly: shooting there would capture the subject's front face, but the image would lack detail because of backlight. At the adjusted shooting position, the side or front of the subject can still be captured without degrading the shooting effect; because the shooting position keeps the same preferable distance from the subject, the whole subject can still be captured, and the blur caused by backlit shooting is avoided.
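The arc adjustment described above, keeping the subject-to-camera distance fixed while sliding along the circle, amounts to a 2D rotation about the subject. A minimal sketch (function name and coordinate convention are illustrative):

```python
import math

def adjust_along_arc(subject_xy, initial_xy, delta_rad):
    """Rotate the initial shooting position about the subject by delta_rad,
    preserving the preferred shooting distance (the circle's radius)."""
    sx, sy = subject_xy
    dx, dy = initial_xy[0] - sx, initial_xy[1] - sy
    c, s = math.cos(delta_rad), math.sin(delta_rad)
    return (sx + dx * c - dy * s, sy + dx * s + dy * c)
```

Because this is a pure rotation, the distance to the subject (and hence the framing of the whole body) is unchanged; only the lighting angle changes.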
In another embodiment, when backlit shooting is determined from the sunlight irradiation direction and the camera orientation, the robot interacts with the subject to prompt the subject to change orientation. The robot then collects the subject's information again, obtains a new shooting position, adjusts the original initial position to that shooting position, and shoots the subject there.
In another embodiment, step S120 includes the following: and determining the shooting position according to the information of the shot object and preset shooting information, wherein the preset shooting information comprises a preset distance between a camera of the robot and the shot object and an angle formed by a connecting line between the camera and the shot object and a horizontal plane.
In one embodiment, the user presets the preset shooting information according to usage habits. The preset shooting information includes the distance between the camera and the subject: for a user who habitually takes half-body shots the distance may be a smaller value, and for full-body shots a larger value. It also includes the angle formed by the line between the camera and the subject and the horizontal plane, set according to the user's shooting habit, such as a 45-degree angle or head-on shooting.
In another embodiment, the robot reads a local album, or the album of a connected mobile terminal, recognizes the user's shooting habits from the images in it, and automatically sets the preset shooting information according to those habits.
The shooting position is then determined from the information of the subject and the preset shooting information. Because the shooting position conforms to the user's shooting habits, the image the user expects can be obtained.
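Given the preset distance and the preset angle with the horizontal plane, a candidate camera position can be computed geometrically, as in this sketch. The bearing parameter and the coordinate convention (z up, bearing 0 along +y) are assumptions added for illustration.

```python
import math

def camera_position(subject_xyz, distance_m, elevation_deg, bearing_deg=0.0):
    """Place the camera distance_m from the subject so that the camera-subject
    line makes elevation_deg with the horizontal plane, approached from compass
    bearing bearing_deg (0 = +y axis)."""
    sx, sy, sz = subject_xyz
    horiz = distance_m * math.cos(math.radians(elevation_deg))
    return (
        sx + horiz * math.sin(math.radians(bearing_deg)),
        sy + horiz * math.cos(math.radians(bearing_deg)),
        sz + distance_m * math.sin(math.radians(elevation_deg)),
    )
```

For a 2 m distance at a 30° elevation, the camera sits about 1.73 m away horizontally and 1 m above the subject's reference point.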
In another embodiment, referring to fig. 3, step S120 includes the following sub-steps:
Sub-step S125, acquiring the input imaging requirement information.
The imaging requirement information may be, for example, a requirement to take a half-body photograph or a full-body photograph.
In one embodiment, the imaging requirement may be entered by the user. As one way, the user states the imaging requirement by voice while interacting with the robot. As another way, the user operates the robot's display screen, and the imaging requirement information is generated from that operation. As yet another way, the user inputs the imaging requirement information on a mobile terminal connected to the robot, which sends it to the robot.
In another embodiment, the robot reads a local album, or the album of a connected mobile terminal, recognizes the user's shooting habits from the images in it, and automatically sets the imaging requirement information accordingly. It can be understood that if the user habitually takes half-body shots, the imaging requirement information is determined to be a half-body requirement according to that habit.
Substep S126, determining the shooting position according to the information of the shot object and the imaging requirement information.
The shooting position is determined from the information of the subject and the imaging requirement information, and the image the user expects is captured at that position.
When shooting the subject, referring to fig. 4, step S130 includes the following sub-steps:
Sub-step S131, acquiring the position of the robot, and planning a moving path according to the position of the robot and the shooting position.
Sub-step S132, when an obstacle exists on the moving path, bypassing the obstacle and moving to the shooting position to shoot.
The robot detects obstacles on the path using an ultrasonic radar detector, an infrared detector, a camera, and the like. When a protruding object is higher than a preset height, the robot cannot cross it, the object is confirmed to be an obstacle, and the robot must go around it; when the object is lower than the preset height, the robot can cross it directly and it is not treated as an obstacle. Likewise, when a recessed object is deeper than a preset depth, the robot cannot span it, the object is confirmed to be an obstacle, and the robot must go around it; when the recess is shallower than the preset depth, the robot can span it directly and it is not treated as an obstacle.
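The height/depth threshold test described above could look like the following sketch. The threshold values are placeholders, not figures from the text.

```python
def is_obstacle(protrusion_height_m=0.0, recess_depth_m=0.0,
                max_cross_height_m=0.10, max_span_depth_m=0.05):
    """An object counts as an obstacle if it protrudes higher than the robot can
    cross, or is recessed deeper than the robot can span; otherwise the robot
    simply drives over it."""
    return (protrusion_height_m > max_cross_height_m
            or recess_depth_m > max_span_depth_m)
```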
When the robot encounters an obstacle it cannot go around, it can communicate with the subject by voice, asking the subject to help it past the obstacle or to adjust the shooting position, after which the robot determines the shooting position again. If the robot cannot get around the obstacle and the subject does not assist, the robot announces by voice that it cannot reach the shooting position; after a preset duration following the announcement, it automatically exits shooting to save power.
Optionally, the robot's camera can photograph the obstacle and identify it from the image, for example as a step, a puddle, a depression, a person, or an animal, and then bypass the obstacle and move to the shooting position.
In another embodiment, referring to fig. 5, step S130 includes the following sub-steps:
Sub-step S133, determining the environment information of the subject.
The environmental information may include weather information, brightness information, an environmental type, and the like.
When the environmental information includes weather information, the robot acquires the weather information from a cloud server connected thereto.
When the environment information includes brightness information, the robot may collect it through its own light sensor, or infer it from the current time.
When the environment information includes the environment type, the robot can capture an image of its surroundings through the camera and derive the environment type from it. The robot can also obtain its current position through a positioning module and acquire the environment type corresponding to that position (e.g., forest, city, countryside) from the server.
Sub-step S134, acquiring a photographing mode corresponding to the environment information.
A correspondence between environment information and photographing modes is established in advance; the photographing mode corresponding to the current environment information is obtained from this correspondence and enabled. For example, if the environment information characterizes a dark environment, the corresponding photographing mode is the night mode; if it characterizes the presence of people, the mode is the beautifying mode.
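The pre-established correspondence between environment information and photographing mode is naturally a mapping, e.g. (the labels below are illustrative, with the two entries mirroring the examples in the text):

```python
# Hypothetical environment label -> photographing mode correspondence.
PHOTO_MODE = {
    "dark": "night mode",                  # dark-environment example
    "person_present": "beautifying mode",  # people-in-scene example
}

def photo_mode_for(env_label, default="auto"):
    """Return the photographing mode registered for an environment label,
    falling back to a default mode for unknown environments."""
    return PHOTO_MODE.get(env_label, default)
```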
Sub-step S135, controlling the robot to move to the shooting position and shoot according to the photographing mode.
Optionally, after the robot shoots the subject and obtains an image, if the captured image does not meet a preset condition, the image is deleted to reduce memory usage, and the robot is controlled to shoot again until an image meeting the preset condition is obtained.
In one embodiment, the preset condition includes a preset definition. When the definition of the captured image is below the preset definition, the subject probably moved during shooting and the image does not match what the user expects; the image is deleted so that it does not occupy memory, and the subject is shot again.
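The definition (sharpness) check in this preset condition could be approximated with a simple gradient measure over a grayscale image, as in this sketch. The metric and threshold are assumptions; production code would typically use something like variance of the Laplacian.

```python
def sharpness(gray):
    """Mean absolute difference between horizontally and vertically adjacent
    pixels of a 2D grayscale image (list of equal-length rows); higher = sharper."""
    total, count = 0, 0
    for r, row in enumerate(gray):
        for c, px in enumerate(row):
            if c + 1 < len(row):
                total += abs(row[c + 1] - px)
                count += 1
            if r + 1 < len(gray):
                total += abs(gray[r + 1][c] - px)
                count += 1
    return total / count if count else 0.0

def meets_definition(gray, preset_definition):
    """Keep the image only if its sharpness reaches the preset definition."""
    return sharpness(gray) >= preset_definition
```

A uniform (blurred-flat) image scores 0, while a high-contrast image scores high, so a threshold between the two separates keep from delete-and-reshoot.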
In another embodiment, the preset condition includes a preset position (which may be the middle of the image). When the subject in the captured image is not at the preset position, the image is deleted, the camera is adjusted, and the subject is shot again to obtain the image the user expects. The subject here may be the user.
In another embodiment, the preset condition requires a preset body part to be in a preset state; for example, the preset part is the eyes and the preset state is eyes open. When the eyes in the captured image are not open (i.e., they are closed), the image does not meet the preset condition; the image is deleted and a new one is captured.
Optionally, after multiple images are captured, similar images among them are deleted to reduce the memory they occupy. In one approach, the pixel information of the images is acquired; at least two images whose pixel information is similar are determined to be similar images, one of them is retained, and the rest are deleted to reduce memory usage.
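The "similar pixel information" comparison can be sketched with an average hash: downsample each image, threshold at the mean, and treat images whose bit signatures nearly match as duplicates. The hash size and the bit-difference threshold are illustrative assumptions.

```python
import numpy as np

def average_hash(gray, size=8):
    """Downsample to size x size block means and threshold at the mean,
    giving a coarse boolean signature of the pixel information."""
    h, w = gray.shape
    # Crop so both dimensions divide evenly, then take block means.
    small = gray[:h - h % size, :w - w % size].reshape(
        size, (h - h % size) // size, size, (w - w % size) // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def deduplicate(images, max_diff_bits=5):
    """Keep one image per group of near-identical hashes, drop the rest."""
    kept, hashes = [], []
    for img in images:
        h = average_hash(img)
        if all(np.count_nonzero(h != k) > max_diff_bits for k in hashes):
            kept.append(img)
            hashes.append(h)
    return kept
```

Hash-based grouping is cheap enough to run on-robot after each burst of shots, which matches the memory-saving purpose described above.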
To implement the above method embodiments, this embodiment provides a robot shooting device. Referring to fig. 6, the robot shooting device 100 includes: an acquisition module 110, a determining module 120, and a shooting module 130.
An acquisition module 110 for acquiring information of a subject;
a determining module 120, configured to determine a shooting position according to the information of the subject;
and the shooting module 130 is used for controlling the robot to move to the shooting position to shoot.
Optionally, the information of the subject includes: at least one of sound information, face information, body information, and position information of the subject.
Optionally, the determining module 120 includes: the device comprises a depth image acquisition module and a first determination module.
And the depth image acquisition module is used for acquiring the depth image of the shot object acquired by the robot.
And the first determining module is used for acquiring the position information of the shot object according to the depth image.
Optionally, the determining module 120 includes: the system comprises an initial position determining module, an outdoor acquisition module, an orientation determining module and a second determining module.
An initial position determining module, configured to determine an initial position according to information of the subject;
the outdoor acquisition module is used for acquiring current time information, geographic information of the shot object and the direction of a camera of the robot when the shot object is located outdoors;
the orientation determining module is used for determining the sunlight irradiation direction according to the time information and the geographic information;
and the second determining module is used for adjusting the initial position to obtain the shooting position when the camera backlighting shooting is determined according to the sunlight irradiation direction and the direction of the camera.
Optionally, the determining module 120 includes: and a third determination module.
And the third determining module is used for determining the shooting position according to the information of the shot object and preset shooting information, wherein the preset shooting information includes a preset distance between the camera of the robot and the shot object, and the angle formed between the horizontal plane and the line connecting the camera and the shot object.
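The geometry in this module — a preset camera-to-subject distance plus the angle between the camera-subject line and the horizontal plane — fixes the camera position up to a horizontal approach direction. A sketch, with the approach direction left as an assumed free parameter:

```python
import math

def shooting_position(subject_xyz, distance, elevation_deg, approach_deg=0.0):
    """Camera position at `distance` from the subject, with the camera-subject
    line making `elevation_deg` with the horizontal plane.

    approach_deg picks the horizontal direction from which the robot
    approaches (an assumed free parameter; the disclosure fixes only the
    distance and the angle).
    """
    x, y, z = subject_xyz
    elev = math.radians(elevation_deg)
    yaw = math.radians(approach_deg)
    horiz = distance * math.cos(elev)          # horizontal offset
    return (x + horiz * math.cos(yaw),
            y + horiz * math.sin(yaw),
            z + distance * math.sin(elev))     # camera above subject if elev > 0
```

For instance, a 0° elevation places the camera level with the shot object at the full preset distance, while larger angles trade horizontal standoff for height.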
Optionally, the determining module 120 includes: an imaging requirement information acquisition module and a fourth determination module.
The imaging requirement information acquisition module is used for acquiring the input imaging requirement information;
and a fourth determining module, configured to determine the shooting position according to the information of the subject and the imaging requirement information.
Optionally, the photographing module 130 includes: the system comprises a planning module and a first shooting module.
The planning module is used for acquiring the position of the robot and planning a moving path according to the position of the robot and the shooting position;
and the first shooting module is used for bypassing the obstacle to move to the shooting position for shooting when the obstacle exists on the moving path.
Optionally, the photographing module 130 includes: the device comprises an environment information determining module, a photographing mode obtaining module and a second photographing module.
The environment information determining module is used for determining environment information of the shot object;
the shooting mode acquisition module is used for acquiring a shooting mode corresponding to the environment information;
and the second shooting module is used for controlling the robot to move to the shooting position and shooting according to the shooting mode.
Optionally, the robot photographing device 100 further includes: and a re-shooting module.
And the re-shooting module is used for deleting the shot image and controlling the robot to re-shoot when the shot image does not meet the preset condition.
The specific manner in which each module performs its operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
The present disclosure also provides a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the robot shooting method provided by the present disclosure.
Fig. 7 is a block diagram of an electronic device 800 for a robot shooting method, according to an exemplary embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 7, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
Input/output interface 812 provides an interface between processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication, wired or wireless, between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the electronic device 800 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
In one embodiment, the apparatus may be a stand-alone electronic device or part of a stand-alone electronic device, such as an integrated circuit (IC) or a chip, where the integrated circuit may be a single IC or a collection of ICs. The chip may include, but is not limited to: a GPU (graphics processing unit), a CPU (central processing unit), an FPGA (field-programmable gate array), a DSP (digital signal processor), an ASIC (application-specific integrated circuit), an SoC (system on chip), and the like. The integrated circuit or chip may execute executable instructions (or code) to implement the robot shooting method. The executable instructions may be stored on the integrated circuit or chip, or obtained from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory and, when executed by the processor, implement the robot shooting method; or the integrated circuit or chip may receive the executable instructions through the interface and transmit them to the processor for execution, so as to implement the robot shooting method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described robot shooting method when being executed by the programmable apparatus.
In summary, the robot shooting method, device, electronic equipment, and medium of the present disclosure acquire information of a shot object, automatically determine a shooting position from that information, control the robot to move to the shooting position, and then photograph the shot object. Because the robot selects the shooting position automatically from the shot object's information, a better image can be captured at the selected position without the user manually selecting or setting the shooting position, which makes it convenient to use the robot for shooting.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A robotic photographing method, the method comprising:
acquiring information of a shot object;
determining a shooting position according to the information of the shot object;
and controlling the robot to move to the shooting position to shoot.
2. The method of claim 1, wherein the subject information comprises: at least one of sound information, face information, body information, and position information of the subject.
3. The method according to claim 1 or 2, wherein the acquiring information of the subject includes:
acquiring a depth image of the shot object acquired by a robot;
and acquiring the position information of the shot object according to the depth image.
4. The method according to claim 1, wherein the determining a shooting position from the information of the subject includes:
determining an initial position according to the information of the shot object;
when the shot object is located outdoors, acquiring current time information, geographic information of the shot object and the orientation of a camera of the robot;
determining a sunlight irradiation direction according to the time information and the geographic information;
and when the backlight shooting of the camera is determined according to the sunlight irradiation direction and the direction of the camera, adjusting the initial position to obtain the shooting position.
5. The method according to claim 1, wherein the determining a shooting position from the information of the subject includes:
and determining the shooting position according to the information of the shot object and preset shooting information, wherein the preset shooting information comprises a preset distance between a camera of the robot and the shot object and an angle formed by a connecting line between the camera and the shot object and a horizontal plane.
6. The method according to claim 1, wherein the determining a shooting position from the information of the subject includes:
acquiring input imaging requirement information;
and determining the shooting position according to the information of the shot object and the imaging requirement information.
7. The method of claim 6, wherein the controlling the robot to move to the photographing position for photographing comprises:
acquiring the position of the robot, and planning a moving path according to the position of the robot and the shooting position;
and when an obstacle exists on the moving path, bypassing the obstacle and moving to the shooting position to shoot.
8. The method of claim 1, wherein the controlling the robot to move to the photographing position for photographing comprises:
determining environment information of the shot object;
acquiring a photographing mode corresponding to the environment information;
and controlling the robot to move to the shooting position, and shooting according to the shooting mode.
9. The method as recited in claim 1, further comprising:
and deleting the shot image when the shot image does not meet the preset condition, and controlling the robot to shoot again.
10. A robotic photographing apparatus, the apparatus comprising:
the acquisition module is used for acquiring the information of the shot object;
a determining module, configured to determine a shooting position according to information of the subject;
and the shooting module is used for controlling the robot to move to the shooting position to shoot.
11. An electronic device, comprising:
a storage device having at least one computer program stored thereon;
at least one processing means for executing said at least one computer program in said storage means to carry out the steps of the method according to any one of claims 1-9.
12. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1-9.
CN202211086156.7A 2022-09-06 2022-09-06 Robot shooting method, device, electronic equipment and medium Pending CN117714882A (en)


Publications (1)

Publication Number Publication Date
CN117714882A true CN117714882A (en) 2024-03-15



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination