CN109981967B - Shooting method and device for intelligent robot, terminal equipment and medium


Info

Publication number
CN109981967B
CN109981967B (granted from application CN201711449710.2A)
Authority
CN
China
Prior art keywords
shooting
picture
camera
intelligent robot
shot
Prior art date
Legal status
Active
Application number
CN201711449710.2A
Other languages
Chinese (zh)
Other versions
CN109981967A (en)
Inventor
熊友军 (Xiong Youjun)
刘锐 (Liu Rui)
Current Assignee
Beijing Youbixuan Intelligent Robot Co., Ltd.
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201711449710.2A
Publication of CN109981967A
Application granted
Publication of CN109981967B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The invention is applicable to the technical field of artificial intelligence, and provides a shooting method and apparatus, a terminal device and a medium for an intelligent robot, wherein the method comprises the following steps: when a shooting event is triggered, starting a camera of the intelligent robot; performing image recognition on a shooting picture of the camera to determine the position of a shooting object in the shooting picture; and controlling the camera of the intelligent robot to move, and stopping moving when the position of the shooting object in the shooting picture is located in a preset area. The preset area is one of sixteen equal regions of the shooting picture, and a golden section point of the shooting picture lies within the preset area. The invention realizes automatic composition during shooting by the intelligent robot: the output image has a better composition effect, the shooting quality of the picture is not degraded when a user without photography experience controls the intelligent robot to shoot, and stable output of high-quality images is ensured.

Description

Shooting method and device for intelligent robot, terminal equipment and medium
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a shooting method and device for an intelligent robot, terminal equipment and a medium.
Background
In recent years, with the continuous progress of science and technology, mobile terminals have developed greatly and become widespread. Since mobile terminals are generally equipped with a shooting function and shooting applications are increasingly rich, users can be seen performing shooting operations at any time in all kinds of everyday scenes.
However, such shooting applications currently exist only on mobile terminals or camera devices. Although computer vision and Artificial Intelligence (AI) technology are developing rapidly, robot products, as carriers of AI, still lack a shooting function. Even when a camera is configured on an intelligent robot, its shooting operation can only rely on the subjective judgment of an operator performing manual framing, so the operator's photographic skill directly affects the final output of the picture. If the operator does not know how to compose a good picture, the shooting quality is reduced, which hinders the popularization of intelligent robots in the shooting field.
Disclosure of Invention
In view of this, embodiments of the present invention provide a shooting method and apparatus, a terminal device, and a medium for an intelligent robot, so as to solve the problem that existing intelligent robots cannot perform automatic composition during shooting.
A first aspect of an embodiment of the present invention provides a shooting method for an intelligent robot, including:
when a shooting event is triggered, starting a camera of the intelligent robot;
carrying out image recognition on a shooting picture of the camera so as to determine the position of a shooting object in the shooting picture;
controlling a camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in a preset area, and stopping moving;
wherein the preset area is one of sixteen equal regions of the shooting picture, and a golden section point of the shooting picture lies within the preset area.
A second aspect of an embodiment of the present invention provides a photographing apparatus for an intelligent robot, including:
the starting unit is used for starting a camera of the intelligent robot when a shooting event is triggered;
the identification unit is used for carrying out image identification on a shooting picture of the camera so as to determine the position of a shooting object in the shooting picture;
the control unit is used for controlling the camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in a preset area, and stopping moving;
wherein the preset area is one of sixteen equal regions of the shooting picture, and a golden section point of the shooting picture lies within the preset area.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the photographing method for an intelligent robot according to the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the shooting method for an intelligent robot according to the first aspect.
In the embodiment of the invention, the position of a shooting object in a shooting picture can be determined by performing image recognition on the shooting picture of the camera of the intelligent robot. The shooting picture is divided into sixteen equal regions, the regions containing the golden section points are used as preset regions, and the camera is controlled to move so that, after the movement, the shooting object lies within a preset region; automatic composition by the intelligent robot during shooting is thereby realized. Because a golden section point is located in the preset region, the finally output image has a better composition effect, the shooting quality of the picture is not degraded when a user without photography experience controls the intelligent robot to shoot, and stable output of high-quality images is ensured.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without inventive effort.
Fig. 1 is a flowchart of an implementation of a shooting method for an intelligent robot according to an embodiment of the present invention;
fig. 2 is a flowchart of a specific implementation of the photographing method S102 for the intelligent robot according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of a rectangular detection box for enclosing a human face object according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a location area where a photographic subject is located according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a shot picture divided into sixteen equal parts according to an embodiment of the present invention;
fig. 6 is a flowchart of a specific implementation of the photographing method S103 for the intelligent robot according to the embodiment of the present invention;
fig. 7 is a flowchart of another specific implementation of the photographing method S103 for the intelligent robot according to the embodiment of the present invention;
fig. 8 is a structural block diagram of a photographing apparatus for an intelligent robot according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, fig. 1 is a flowchart illustrating an implementation of a shooting method for an intelligent robot according to an embodiment of the present invention. The implementation flow shown in fig. 1 includes steps S101 to S103, and the implementation principle of each step is specifically as follows:
s101: and when the shooting event is triggered, starting the camera of the intelligent robot.
As a specific implementation example of the present invention, a shooting event is triggered when a user clicks a physical key or a virtual key on the intelligent robot, or draws a preset touch gesture on a touch screen of the intelligent robot.
As another specific implementation example of the present invention, the intelligent robot is wirelessly connected to a remote terminal device, and a shooting event is triggered when a shooting control command sent by the terminal device is received.
When a shooting event is triggered, a shooting preparation phase is entered and the "camera" application is brought to the foreground. Specifically, a camera preset on the outer surface of the intelligent robot or inside the intelligent robot is started. The camera is used for photographing scenes other than the intelligent robot itself.
For an image captured by the camera, an optical image is formed through the lens and projected onto the sensor, where it is converted into an electrical signal; the electrical signal is converted into a digital signal through analog-to-digital conversion, processed by Digital Signal Processing (DSP), and then sent to the processor for processing, so as to obtain the imaged shot picture.
Preferably, as an embodiment of the present invention, the above S101 specifically includes: receiving a voice signal sent by a user; carrying out recognition processing on the voice signal to obtain a control instruction corresponding to the voice signal; and if the control instruction is a shooting instruction, triggering a shooting event of the intelligent robot so as to start a camera of the intelligent robot.
The embodiment of the invention is suitable for a scene in which a user controls the intelligent robot to take a picture of the user. When the user has chosen a shooting scene, a voice signal can be sent to the intelligent robot; the voice signal may be, for example, "shoot me". The intelligent robot then analyzes the received voice signal through a preset semantic recognition algorithm. If the keywords recognized in the voice signal are the pre-recorded keywords corresponding to the shooting instruction, a voice shooting instruction is confirmed to be detected and a shooting event is triggered.
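For illustration only, the keyword-matching step described above can be sketched as follows. The keyword list and the helper names (is_shoot_instruction, on_voice_signal, start_camera) are assumptions made for this sketch rather than part of the disclosed method, and speech-to-text conversion is assumed to be handled elsewhere by the robot's speech-recognition module.

```python
# Illustrative sketch of the keyword matching described above.
# The keyword list and helper names are assumptions for illustration;
# speech recognition itself is assumed to be provided elsewhere.

SHOOT_KEYWORDS = {"shoot me", "take a photo", "take a picture"}

def is_shoot_instruction(recognized_text: str) -> bool:
    """Return True if the recognized speech contains a pre-recorded shooting keyword."""
    text = recognized_text.lower()
    return any(keyword in text for keyword in SHOOT_KEYWORDS)

def on_voice_signal(recognized_text: str, start_camera) -> None:
    """Trigger the shooting event (start the camera) when a shooting keyword is matched."""
    if is_shoot_instruction(recognized_text):
        start_camera()
```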
Preferably, two cameras are arranged on the intelligent robot.
It is worth noting that the intelligent robot may or may not be provided with a display screen.
Preferably, the imaged picture is output to the display screen of the intelligent robot as a real-time preview visible to the user.
S102: image recognition is performed on the shooting picture of the camera to determine the position of the shooting object in the shooting picture.
In the embodiment of the invention, the shot picture obtained by the camera is detected in real time through a preset image recognition algorithm, so as to determine the shooting object in the shot picture. The shooting object is the target that requires focusing or tracking during the shooting process. The image recognition algorithm includes, but is not limited to, an animal recognition algorithm, a face detection algorithm, and other algorithms for detecting objects of a specific shape.
As an embodiment of the present invention, fig. 2 shows a specific implementation flow of the photographing method S102 for an intelligent robot according to an embodiment of the present invention, which is detailed as follows:
s1021: and carrying out image recognition on the shot picture of the camera so as to detect each human face object in the shot picture.
In the embodiment of the invention, the shot picture of the camera is subjected to image recognition through a face detection algorithm. The face detection algorithm may be, for example, the AdaBoost face recognition algorithm or an OpenCV detection algorithm based on facial features, which is not limited herein.
The position area of each face object in the shot picture is determined through the face detection algorithm prestored in the camera.
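As a non-limiting sketch of this detection step, the following fragment uses OpenCV's bundled Haar cascade, one detector family consistent with the algorithms named above; the actual detector prestored in the camera is not specified by this description.

```python
# Minimal per-frame face detection sketch using OpenCV's bundled Haar
# cascade; the exact detector used by the robot is an assumption here.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return one (x, y, w, h) rectangle per detected face in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```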
S1022: a rectangular detection frame is generated for each face object, each rectangular detection frame comprising four edge lines.
In the position area where each face object is located, a face detection frame surrounding that area is generated. Each face detection frame marks the position area occupied by a face object in the shooting picture, so the number of face detection frames generated in the shooting picture equals the number of face objects detected in it.
Specifically, as shown in fig. 3, in the embodiment of the present invention, the face detection frame is a rectangular detection frame, and is surrounded by four edge lines connected end to end.
S1023: among all the edge lines of the rectangular detection frames, the leftmost first edge line, the rightmost second edge line, the topmost third edge line and the bottommost fourth edge line are determined.
When N face objects are detected in the shot picture, since each face object is surrounded by one rectangular detection frame and each rectangular detection frame comprises four edge lines, 4N edge lines exist in the shot picture, where N is an integer greater than zero.
The position coordinates of the edge lines in the shooting picture differ from one another. In the embodiment of the invention, the long edge of each rectangular detection frame is parallel to one side of the shot picture, and the short edge is parallel to the other side. The functional expression corresponding to each edge line in the shot picture is determined according to the coordinate values of the position points on that edge line.
For example, if an edge line AB parallel to the bottom edge of the shooting picture exists and the ordinate of every position point on AB is a, the edge line is determined to be a horizontal edge line, and its corresponding functional expression is y = a; if an edge line AD parallel to the side edge of the shot picture exists and the abscissa of every position point on AD is b, the edge line is determined to be a vertical edge line, and its corresponding functional expression is x = b.
According to the constant b in the functional expression of each vertical edge line, the vertical edge lines with the smallest and largest constants are determined among the 2N vertical edge lines contained in the shot picture: the vertical edge line with the smallest constant is the leftmost first edge line, and the vertical edge line with the largest constant is the rightmost second edge line. Similarly, the horizontal edge lines with the smallest and largest constants are determined: the horizontal edge line with the largest constant is the topmost third edge line, and the horizontal edge line with the smallest constant is the bottommost fourth edge line.
S1024: the area enclosed by the straight lines on which the first edge line, the second edge line, the third edge line and the fourth edge line lie is determined as the position of the shooting object in the shooting picture, and each face object contained in that area is determined as the shooting object.
The first, second, third and fourth edge lines are extended respectively, and the closed area enclosed by the extended edge lines is determined. Every face object contained in this closed area is taken as the currently detected shooting object.
Fig. 4 is a schematic diagram of the location area where a photographic subject is located according to an embodiment of the present invention. As shown in fig. 4, if the closed region enclosed by the extended first, second, third and fourth edge lines is the area ABCD, each face object contained in the area ABCD is determined as a shooting object at the current time; that is, the area ABCD as a whole is the currently detected shooting object.
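In image coordinates the edge-line selection of S1023 and the enclosing area of S1024 reduce to taking the extreme box coordinates. A minimal sketch, assuming face boxes in (x, y, w, h) form and a y-axis growing downward (so the "topmost" edge line has the minimum ordinate, the reverse of the mathematical convention used above):

```python
# Sketch of S1023-S1024 in image coordinates (y grows downward).
# Each face box is (x, y, w, h).

def enclosing_region(face_boxes):
    """Return (left, top, right, bottom) of the closed area enclosing all face boxes."""
    left   = min(x for x, y, w, h in face_boxes)       # leftmost first edge line
    right  = max(x + w for x, y, w, h in face_boxes)   # rightmost second edge line
    top    = min(y for x, y, w, h in face_boxes)       # topmost third edge line
    bottom = max(y + h for x, y, w, h in face_boxes)   # bottommost fourth edge line
    return left, top, right, bottom
```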
In the embodiment of the invention, the rectangular detection frame of each face object is generated, and the shooting-object area containing every face object is determined according to the position distribution of the rectangular detection frames in the shooting interface, so that the robot can automatically identify the shooting object in the subsequent shooting process. When multiple face objects are detected in a shooting picture, all of them are treated as one integral shooting object, so that every face can be captured in the imaged picture; this avoids omitting subjects or misidentifying the shooting object when only a single face is considered, and thus improves the identification accuracy.
S103: the camera of the intelligent robot is controlled to move, and stops moving when the position of the shooting object in the shooting picture is located in a preset area. The preset area is one of sixteen equal regions of the shooting picture, and a golden section point of the shooting picture lies within the preset area.
Please refer to fig. 5. In the embodiment of the invention, the shot picture is divided into sixteen equal rectangular areas in a 4 × 4 arrangement, namely area A to area P. The four golden section points of the photographed picture lie within areas F, G, J and K, respectively, and each of the areas F, G, J and K is one of the above-mentioned preset areas. The shot picture is divided into three equal parts in both the vertical and horizontal directions by four dividing lines, and the intersection points of these dividing lines are the golden section points.
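A short sketch of this division follows: it labels the sixteen cells A to P row by row and reports which cells contain a third-line intersection, reproducing the F, G, J, K result above. The labelling scheme and the example frame size are illustrative assumptions.

```python
# Sketch of the 4 x 4 division: cells are labelled A..P row by row, and a
# cell is a preset area when one of the four third-line intersections
# (the golden section points of this description) falls inside it.

def preset_cells(width: int, height: int):
    """Return the labels of the sixteenth regions containing a golden section point."""
    labels = [chr(ord('A') + i) for i in range(16)]   # A..P, row-major
    points = [(width * i // 3, height * j // 3) for i in (1, 2) for j in (1, 2)]
    cells = set()
    for px, py in points:
        col = min(px * 4 // width, 3)
        row = min(py * 4 // height, 3)
        cells.add(labels[row * 4 + col])
    return sorted(cells)

print(preset_cells(1920, 1080))   # -> ['F', 'G', 'J', 'K']
```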
In the embodiment of the invention, before the shutter operation is executed, the camera of the intelligent robot is controlled to move, so that the shooting angle of view of the camera changes. Specifically, when the camera is embedded in the intelligent robot, the whole body of the intelligent robot is controlled to move; when the camera is arranged on the outside of the robot body, the camera is controlled to rotate and/or the whole body of the intelligent robot is controlled to move. As the shooting angle of view changes, the position of the shooting object in the shooting picture changes in real time, and this real-time position is continuously detected while the camera moves.
If the real-time position of the shooting object in the shooting picture is detected to be within area F, G, J and/or K, the camera is controlled to stop moving, so as to keep the shooting object stably within the preset area.
In particular, if, before the camera of the intelligent robot is controlled to move, the real-time position of the shooting object in the shooting picture is already detected to be within the preset area F, G, J and/or K, the camera is not moved and the shutter operation is executed immediately.
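The move-detect-stop cycle described in this step can be summarized by the following minimal control-loop sketch; get_frame, locate_subject_cell, move_camera_step and trigger_shutter are hypothetical helpers standing in for the robot's camera, detection and motion interfaces, not APIs from this disclosure.

```python
# Minimal control-loop sketch of S103 with hypothetical helper functions.

PRESET_CELLS = {"F", "G", "J", "K"}

def shoot_with_auto_composition(get_frame, locate_subject_cell,
                                move_camera_step, trigger_shutter):
    """Move the camera until the subject sits in a preset region, then shoot."""
    while locate_subject_cell(get_frame()) not in PRESET_CELLS:
        move_camera_step()    # rotate the camera and/or move the robot body
    trigger_shutter()         # subject is on a golden-section region: stop and shoot
```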
In the embodiment of the invention, the position of a shooting object in a shooting picture can be determined by performing image recognition on the shooting picture of the camera of the intelligent robot. The shooting picture is divided into sixteen equal regions, the regions containing the golden section points are used as preset regions, and the camera is controlled to move so that, after the movement, the shooting object lies within a preset region; automatic composition by the intelligent robot during shooting is thereby realized. From a psychological perspective, placing the shooting object on a golden section point produces a focusing effect and gives the viewer of the photo a sense of harmony. Therefore, placing the shooting object in the preset area where a golden section point is located gives the finally output image a better composition effect, prevents the shooting quality from being degraded when a user without photography experience controls the intelligent robot, and ensures stable output of high-quality images.
As an embodiment of the present invention, as shown in fig. 6, the step S103 specifically includes:
s1031: and determining the priority corresponding to each of the sixteen equal parts of areas based on the position of the golden section point of the shot picture.
In the embodiment of the invention, the priority corresponding to each of the sixteen equal regions in the shooting picture is determined according to a received priority setting instruction.
Preferably, in the shooting picture shown in fig. 5, the priority of regions F and G is four, the priority of regions J and K is three, the priority of regions E, I, H and L is two, and the priority of the remaining regions is one. The higher the priority of the region in which the shooting object is located, the better the resulting composition effect.
S1032: the sixteen equal regions are sorted in descending order of priority.
S1033: the first N regions in the order are determined, where N is a preset value and an integer greater than zero and less than sixteen.
The sixteen equal regions are sorted in descending order of their priorities, so that regions earlier in the order have higher priority than later ones, and the first N regions are extracted. The priority of each extracted region is therefore higher than that of all remaining regions, where 0 < N < 16 and N is an integer.
Preferably, in the embodiment of the present invention, the preset value N is 4, and the regions extracted at this time are the regions F, G, J and K.
S1034: the camera of the intelligent robot is controlled to move, and stops moving when the position of the shooting object in the shooting picture is located within the N determined regions.
Before the shutter operation is executed, it is judged whether the real-time position of the shooting object in the shooting picture lies within the N extracted regions; if not, the camera of the intelligent robot is controlled to move. During the movement, this judgment is repeated until the real-time position of the shooting object lies within the N extracted regions; the camera then stops moving, the shutter operation is triggered, and the picture containing the current shooting object is captured and output.
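Putting S1031 to S1034 together, a minimal sketch of the priority ranking and top-N selection, using the example priority values given above, might look as follows.

```python
# Sketch of S1031-S1034 using the example priorities given above
# (F, G = 4; J, K = 3; E, H, I, L = 2; all other regions = 1).

PRIORITY = {"F": 4, "G": 4, "J": 3, "K": 3,
            "E": 2, "H": 2, "I": 2, "L": 2}

def top_regions(n: int = 4):
    """Return the n sixteenth regions with the highest priority."""
    labels = [chr(ord('A') + i) for i in range(16)]
    ranked = sorted(labels, key=lambda c: PRIORITY.get(c, 1), reverse=True)
    return ranked[:n]

print(top_regions())   # -> ['F', 'G', 'J', 'K']
```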
Fig. 7 shows another specific implementation flow of the photographing method S103 for the intelligent robot according to an embodiment of the present invention, which is detailed as follows:
s1035: and determining the priority corresponding to each of the sixteen equal parts of areas based on the position of the golden section point of the shot picture.
In the embodiment of the invention, the central point of each sixteen equal areas in the shooting picture is determined, and the relative distance between each central point and the golden section point closest to the central point is calculated based on the position of each golden section point in the shooting picture.
S1036: if the shooting object contains more than one face object, the camera of the intelligent robot is controlled to move, and stops moving when the number of face objects in the highest-priority region is greater than the number of face objects in each of the other regions.
As an implementation example of the invention, the range interval to which the relative distance between each region and its nearest golden section point belongs is determined. The priority of each region is then obtained according to a preset correspondence between range intervals and priorities, and the region with the highest priority is identified.
As another embodiment of the present invention, the sixteen equal regions are sorted in descending order of the relative distance between each region and its nearest golden section point, and the last region in this order, i.e. the one closest to a golden section point, is determined as the region with the highest priority.
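A minimal sketch of this distance-based ranking, assuming region centres are compared against the four third-line intersection points computed from the frame size:

```python
# Sketch of the distance-based ranking of S1035: regions whose centre is
# closer to a golden section point receive a higher priority.
import math

def regions_by_priority(width: int, height: int):
    """Return region labels A..P ordered from highest to lowest priority."""
    points = [(width * i / 3, height * j / 3) for i in (1, 2) for j in (1, 2)]

    def nearest(label):
        idx = ord(label) - ord('A')
        row, col = idx // 4, idx % 4
        cx, cy = (col + 0.5) * width / 4, (row + 0.5) * height / 4
        return min(math.hypot(cx - px, cy - py) for px, py in points)

    labels = [chr(ord('A') + i) for i in range(16)]
    return sorted(labels, key=nearest)
```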
If the shooting object detected in the shot picture is a single face object, the camera on the intelligent robot is controlled to move so that the face object moves into the region with the highest priority.
If the shooting object detected in the shot picture comprises multiple face objects, the camera on the intelligent robot is controlled to move so that the number of face objects contained in the highest-priority region is greater than that of every other region in the shot picture.
Further, if no camera position within the movable range makes the number of face objects in the highest-priority region greater than that of every other region, the camera of the intelligent robot is controlled to move again within the movable range, so that the number of facial feature points contained in the highest-priority region is greater than the number of facial feature points contained in every other region.
The embodiment of the invention is suitable for scenes containing several face objects that are far apart. For example, if the distance between face object A and face object B is greater than the picture width occupied by one sixteenth region, but the distance between face object A and face object C is less than that width, then after the camera is controlled to move, the camera stops rotating when the shooting object is detected to be within the area F shown in fig. 5.
Illustratively, if the shooting object of the current shot picture only includes face object A and face object B, whose distance apart is greater than the picture width occupied by one sixteenth region, the numbers of facial feature points of face objects A and B are calculated respectively through the face detection algorithm. If face object A has more facial feature points than face object B, then after the camera is controlled to move, the camera stops moving when face object A is detected to be within the area F shown in fig. 5.
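For illustration, the per-region face counting underlying S1036 can be sketched as follows; faces_per_region and composition_reached are hypothetical helper names, face boxes are assumed in (x, y, w, h) form, and the feature-point fallback described above is not shown.

```python
# Sketch of the multi-face rule of S1036: count face centres per
# sixteenth region and check whether the highest-priority region holds
# strictly more faces than every other region.

def faces_per_region(face_boxes, width: int, height: int):
    """Map each region label A..P to the number of face centres inside it."""
    counts = {chr(ord('A') + i): 0 for i in range(16)}
    for x, y, w, h in face_boxes:
        cx, cy = x + w / 2, y + h / 2
        col = min(int(cx * 4 // width), 3)
        row = min(int(cy * 4 // height), 3)
        counts[chr(ord('A') + row * 4 + col)] += 1
    return counts

def composition_reached(counts, best_region):
    """True when best_region holds strictly more faces than any other region."""
    return all(counts[best_region] > n
               for region, n in counts.items() if region != best_region)
```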
When the camera stops moving, the shutter is activated to perform an imaging shooting operation.
In the embodiment of the invention, the moment at which the camera stops moving is determined according to the priority order of the sixteen equal regions and the position area of the shooting object in the shot picture, so that the shooting object is located at the best composition point as far as possible, achieving a higher-quality shooting effect. When the shooting object comprises multiple face objects, making the highest-priority region contain more face objects than any other region allows the larger group of faces to serve as the main shooting object, which reduces the probability that faces the user wants to shoot are missed.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 8 shows a structural block diagram of a photographing apparatus for an intelligent robot according to an embodiment of the present invention, corresponding to the shooting method for an intelligent robot described in the foregoing embodiments. For convenience of description, only the portions related to the embodiment of the present invention are shown.
Referring to fig. 8, the apparatus includes:
and the starting unit 81 is used for starting the camera of the intelligent robot when the shooting event is triggered.
And the recognition unit 82 is used for carrying out image recognition on the shooting picture of the camera so as to determine the position of the shooting object in the shooting picture.
And the control unit 83 is used for controlling the camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in a preset area, and stopping moving.
The preset area is sixteen equal parts of the shooting picture, and golden section points of the shooting picture exist in the preset area.
Optionally, the identification unit 82 includes:
and the detection subunit is used for carrying out image recognition on the shot picture of the camera so as to detect each human face object in the shot picture.
And the generating subunit is used for respectively generating a rectangular detection frame of each face object, and each rectangular detection frame comprises four edge lines.
And the first determining subunit is configured to determine, among all the edge lines of each of the rectangular detection frames, a leftmost first edge line, a rightmost second edge line, a topmost third edge line, and a bottommost fourth edge line.
And the second determining subunit is configured to determine, as the position of the photographic object in the photographic picture, an area surrounded by straight lines where the first edge line, the second edge line, the third edge line, and the fourth edge line are located, and determine, as the photographic object, each of the face objects included in the area.
Optionally, the control unit 83 includes:
and the third determining subunit is configured to determine, based on the position of the golden section point of the captured picture, a priority corresponding to each of the sixteen equal parts of the area.
And the sequencing subunit is used for sequencing each sixteen equal parts of areas according to the high-low sequence of the priority.
And the fourth determining subunit is configured to determine N sixteen equal parts of the regions sorted in the front, where N is a preset value, and N is an integer greater than zero and less than sixteen.
And the first control subunit is used for controlling the camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in the determined N sixteen equal areas, and stopping moving.
Optionally, the control unit 83 includes:
and the fifth determining subunit is configured to determine, based on the position of the golden section point of the captured picture, a priority corresponding to each of the sixteen equal parts of the area.
And the second control subunit is configured to, if the photographic object includes more than one face object, control the camera of the intelligent robot to move until the number of face objects in the sixteen equal parts of the area with the largest priority is greater than the number of face objects in each of the sixteen equal parts of the area, and stop moving.
Optionally, the starting unit 81 includes:
and the receiving subunit is used for receiving the voice signal sent by the user.
And the acquisition subunit is used for carrying out recognition processing on the voice signal so as to acquire a control instruction corresponding to the voice signal.
And the starting sub-unit is used for triggering the shooting event of the intelligent robot if the control instruction is a shooting instruction so as to start the camera of the intelligent robot.
In the embodiment of the invention, the position of a shooting object in a shooting picture can be determined by performing image recognition on the shooting picture of the camera of the intelligent robot. The shooting picture is divided into sixteen equal regions, the regions containing the golden section points are used as preset regions, and the camera is controlled to move so that, after the movement, the shooting object lies within a preset region; automatic composition by the intelligent robot during shooting is thereby realized. Because a golden section point is located in the preset region, the finally output image has a better composition effect, the shooting quality of the picture is not degraded when a user without photography experience controls the intelligent robot to shoot, and stable output of high-quality images is ensured.
Fig. 9 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 9, the terminal device 9 of this embodiment includes: a processor 90, a memory 91 and a computer program 92 stored in said memory 91 and executable on said processor 90, such as a camera program for an intelligent robot. The processor 90, when executing the computer program 92, implements the steps in the above-described embodiments of the photographing method for the intelligent robot, such as the steps 101 to 103 shown in fig. 1. Alternatively, the processor 90, when executing the computer program 92, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the units 81 to 83 shown in fig. 8.
Illustratively, the computer program 92 may be partitioned into one or more modules/units that are stored in the memory 91 and executed by the processor 90 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 92 in the terminal device 9.
The terminal device 9 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 90 and a memory 91. Those skilled in the art will appreciate that fig. 9 is only an example of the terminal device 9 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components. For example, it may also include input-output devices, network access devices, buses, etc.
The processor 90 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or a memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 9. Further, the memory 91 may also include both an internal storage unit and an external storage device of the terminal device 9. The memory 91 is used for storing the computer program and other programs and data required by the terminal device. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (9)

1. A shooting method for an intelligent robot, comprising:
when a shooting event is triggered, starting a camera of the intelligent robot;
carrying out image recognition on a shooting picture of the camera so as to determine the position of a shooting object in the shooting picture;
controlling a camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in a preset area, and stopping moving;
the shot picture is divided into sixteen equal areas, and the area where the golden section point is located is the preset area;
the image recognition of the shot picture of the camera to determine the position of the shot object in the shot picture comprises the following steps:
carrying out image recognition on a shot picture of the camera so as to detect each human face object in the shot picture;
the control the camera of intelligent robot moves, when the position of shooting the object is located preset region in the picture of shooing, stops moving, includes:
determining the priority corresponding to each of the sixteen equal parts of areas based on the position of a golden section point of the shot picture;
and if the shot object contains more than one face object, controlling the camera of the intelligent robot to move until the number of the face objects in the sixteen equal area with the maximum priority is greater than the number of the face objects in each of the other sixteen equal areas, and stopping moving.
2. The shooting method according to claim 1, wherein the image recognition of the shot of the camera to determine the position of the shot object in the shot comprises: respectively generating a rectangular detection frame of each face object, wherein the rectangular detection frame comprises four edge lines;
determining a first edge line on the leftmost side, a second edge line on the rightmost side, a third edge line on the uppermost side and a fourth edge line on the lowermost side in all the edge lines of the rectangular detection frames;
determining an area surrounded by straight lines where the first side line, the second side line, the third side line and the fourth side line are located as a position where a shooting object is located in the shooting picture, and determining each face object contained in the area as the shooting object.
3. The photographing method of claim 1, wherein the controlling of the camera of the intelligent robot to move until the position of the photographic object in the photographing screen is in a preset area stops moving comprises:
determining the priority corresponding to each of the sixteen equal parts of areas based on the position of a golden section point of the shot picture;
according to the high-low sequence of the priority, sequencing each sixteen equal parts of areas;
determining N sixteen equal parts which are sequenced at the front, wherein N is a preset value and is an integer which is larger than zero and smaller than sixteen;
and controlling the camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in the determined N sixteen equal areas, and stopping moving.
4. The shooting method according to claim 1, wherein the starting of the camera of the intelligent robot when the shooting event is triggered comprises:
receiving a voice signal sent by a user;
carrying out recognition processing on the voice signal to obtain a control instruction corresponding to the voice signal;
and if the control instruction is a shooting instruction, triggering a shooting event of the intelligent robot so as to start a camera of the intelligent robot.
5. A camera device for an intelligent robot, comprising:
the starting unit is used for starting a camera of the intelligent robot when a shooting event is triggered;
the identification unit is used for carrying out image identification on a shooting picture of the camera so as to determine the position of a shooting object in the shooting picture;
the control unit is used for controlling the camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in a preset area, and stopping moving;
the shot picture is divided into sixteen equal areas, and the area where the golden section point is located is the preset area;
the identification unit includes:
the detection subunit is used for carrying out image recognition on the shot picture of the camera so as to detect each human face object in the shot picture;
the control unit includes:
a fifth determining subunit, configured to determine, based on a position of a golden section point of the captured picture, a priority corresponding to each of the sixteen equal parts;
and the second control subunit is configured to, if the photographic object includes more than one face object, control the camera of the intelligent robot to move until the number of face objects in the sixteen equal area with the largest priority is greater than the number of face objects in each of the other sixteen equal areas, and stop moving.
6. The photographing apparatus according to claim 5, wherein the recognition unit includes:
the detection subunit is used for carrying out image recognition on the shot picture of the camera so as to detect each human face object in the shot picture;
the generating subunit is configured to generate a rectangular detection frame for each face object, where the rectangular detection frame includes four edge lines;
the first determining subunit is configured to determine, among all the edge lines of each of the rectangular detection frames, a leftmost first edge line, a rightmost second edge line, a topmost third edge line, and a bottommost fourth edge line;
and the second determining subunit is configured to determine, as the position of the photographic object in the photographic picture, an area surrounded by straight lines where the first edge line, the second edge line, the third edge line, and the fourth edge line are located, and determine, as the photographic object, each of the face objects included in the area.
7. The photographing apparatus according to claim 5, wherein the control unit includes:
a third determining subunit, configured to determine, based on a position of a golden section point of the captured picture, a priority corresponding to each of the sixteen equal parts;
the sorting unit is used for sorting each sixteen equal parts according to the high-low sequence of the priority;
a fourth determining subunit, configured to determine N sixteen equal parts of the regions that are sorted in the front, where N is a preset value, and N is an integer greater than zero and less than sixteen;
and the first control subunit is used for controlling the camera of the intelligent robot to move until the position of the shooting object in the shooting picture is located in the determined N sixteen equal areas, and stopping moving.
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201711449710.2A 2017-12-27 2017-12-27 Shooting method and device for intelligent robot, terminal equipment and medium Active CN109981967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711449710.2A CN109981967B (en) 2017-12-27 2017-12-27 Shooting method and device for intelligent robot, terminal equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711449710.2A CN109981967B (en) 2017-12-27 2017-12-27 Shooting method and device for intelligent robot, terminal equipment and medium

Publications (2)

Publication Number Publication Date
CN109981967A CN109981967A (en) 2019-07-05
CN109981967B (en) 2021-06-29

Family

ID=67072519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711449710.2A Active CN109981967B (en) 2017-12-27 2017-12-27 Shooting method and device for intelligent robot, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN109981967B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111417064B (en) * 2019-12-04 2021-08-10 南京智芯胜电子科技有限公司 Audio-visual accompanying control method based on AI identification
CN114095642A (en) * 2020-07-31 2022-02-25 中兴通讯股份有限公司 Picture generation method, computer-readable storage medium and electronic device
WO2022037229A1 (en) * 2020-08-21 2022-02-24 海信视像科技股份有限公司 Human image positioning methods and display devices

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4444936B2 (en) * 2006-09-19 2010-03-31 富士フイルム株式会社 Imaging apparatus, method, and program
JP4902562B2 (en) * 2007-02-07 2012-03-21 パナソニック株式会社 Imaging apparatus, image processing apparatus, control method, and program
NO327899B1 (en) * 2007-07-13 2009-10-19 Tandberg Telecom As Procedure and system for automatic camera control
JP5246275B2 (en) * 2011-01-25 2013-07-24 株式会社ニコン Imaging apparatus and program
US9536132B2 (en) * 2011-06-24 2017-01-03 Apple Inc. Facilitating image capture and image review by visually impaired users
CN104883497A (en) * 2015-04-30 2015-09-02 广东欧珀移动通信有限公司 Positioning shooting method and mobile terminal
CN106131413B (en) * 2016-07-19 2020-04-14 纳恩博(北京)科技有限公司 Shooting equipment and control method thereof

Also Published As

Publication number Publication date
CN109981967A (en) 2019-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder
Address after: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province
Patentee after: Shenzhen Youbixuan Technology Co.,Ltd.
Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.
TR01 Transfer of patent right
Effective date of registration: 20231212
Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176
Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.
Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.