CN115706854A - Camera control method and device for foot type robot and foot type robot

Publication number: CN115706854A
Authority: CN (China)
Prior art keywords: target object, camera, legged robot, camera control, resolution
Legal status: Pending
Application number: CN202110900113.7A
Other languages: Chinese (zh)
Inventor: 豆子飞
Assignee: Beijing Xiaomi Mobile Software Co Ltd (current and original)
Priority date: 2021-08-06
Filing date: 2021-08-06
Publication date: 2023-02-17

Abstract

The disclosure relates to a camera control method and device for a legged robot, and to a legged robot, and belongs to the technical field of robots. The camera control method of the legged robot comprises the following steps: acquiring a working mode of the legged robot, and acquiring a target object in the working mode and an identification precision requirement corresponding to the target object; acquiring a target object image of the target object currently shot by a camera of the legged robot; judging whether the target object image meets the identification precision requirement; and, if the identification precision requirement is not met, adjusting the focal length of the camera. The method automatically judges whether the image of the target object currently shot by the camera meets the identification precision requirement and adjusts the focal length of the camera when it does not, so that the target object can be effectively identified and the user experience is improved.

Description

Camera control method and device for foot type robot and foot type robot
Technical Field
The present disclosure relates to the field of robot technologies, and in particular, to a camera control method and apparatus for a foot robot, and a foot robot.
Background
With the continuous development of robots, using robots to identify target objects has become widespread in various fields. However, during target identification, the distance between the target object and the robot may be too far or too close, which makes it difficult for the robot to effectively identify the target object, so that target identification fails and the user experience is poor.
Disclosure of Invention
The present disclosure provides a camera control method and apparatus for a legged robot, an electronic device, a computer-readable storage medium, and a computer program product, so as to at least solve the problem in the related art that, when the distance between a target object and a robot is too far or too close, the robot has difficulty effectively identifying the target object, target identification fails, and the user experience is poor. The technical solution of the disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a camera control method of a legged robot, including: acquiring a working mode of the foot type robot, and acquiring a target object in the working mode and an identification precision requirement corresponding to the target object; acquiring a target object image of a target object currently shot by a camera of the foot robot; judging whether the target object image meets the identification precision requirement or not; and if the identification precision requirement is not met, adjusting the focal length of the camera.
In an embodiment of the present disclosure, the adjusting the focal length of the camera includes: acquiring a first resolution of the target object image; acquiring a second resolution meeting the identification precision requirement; adjusting a focal length of the camera according to a resolution difference between the first resolution and the second resolution.
In an embodiment of the present disclosure, after the adjusting the focal length of the camera, the method further includes: if the identification precision requirement is still not met, acquiring the current distance between the foot type robot and a target object; and controlling the distance between the foot type robot and the target object according to the current distance until the identification precision requirement is met.
In one embodiment of the present disclosure, the obtaining of the working mode of the legged robot includes: receiving a user instruction; and determining the working mode of the foot type robot according to the user instruction.
In one embodiment of the present disclosure, the working mode corresponds to a plurality of objects, wherein each of the objects corresponds to a recognition accuracy requirement.
In one embodiment of the present disclosure, further comprising: acquiring the position relation among the plurality of target objects according to the working mode; and performing rotation control on the camera of the foot type robot according to the position relation among the plurality of target objects.
In one embodiment of the present disclosure, the legged robot includes a torso and a head, wherein the camera is mounted on the head and the head is rotatable relative to the torso.
In one embodiment of the present disclosure, further comprising: detecting a change in position of the target; and controlling the head to rotate according to the position change of the target object.
In one embodiment of the present disclosure, the camera is mounted on the head by a pan-tilt head. The method further comprises: detecting a change in the position of the target object; and controlling the pan-tilt head to rotate according to the position change of the target object.
According to a second aspect of the embodiments of the present disclosure, there is provided a camera control apparatus of a legged robot, including: a first acquisition module configured to acquire a working mode of the legged robot, and to acquire a target object in the working mode and an identification precision requirement corresponding to the target object; a second acquisition module configured to acquire a target object image of a target object currently photographed by a camera of the legged robot; a judging module configured to judge whether the target object image meets the identification precision requirement; and an adjustment module configured to adjust the focal length of the camera if the identification precision requirement is not met.
In one embodiment of the present disclosure, the adjusting module includes: a first acquisition unit configured to perform acquisition of a first resolution of the target object image; a second acquisition unit configured to perform acquisition of a second resolution that satisfies the identification accuracy requirement; an adjustment unit configured to perform adjustment of a focal length of the camera according to a resolution difference of the first resolution and the second resolution.
In one embodiment of the present disclosure, further comprising: a third obtaining module configured to obtain a current distance between the legged robot and a target object if the identification accuracy requirement is still not met; and the first control module is configured to control the distance between the legged robot and the target object according to the current distance until the identification precision requirement is met.
In one embodiment of the present disclosure, the first obtaining module includes: a receiving unit configured to perform receiving a user instruction; a determining unit configured to perform determining an operating mode of the legged robot according to the user instruction.
In an embodiment of the present disclosure, the working mode corresponds to a plurality of objects, wherein each of the objects corresponds to a recognition accuracy requirement.
In one embodiment of the present disclosure, further comprising: a fourth obtaining module configured to perform obtaining of the positional relationship between the plurality of targets according to the operation mode; a second control module configured to perform rotation control of the camera of the foot robot according to a positional relationship between the plurality of targets.
In one embodiment of the present disclosure, the legged robot includes a torso and a head, wherein the camera is mounted on the head and the head is rotatable relative to the torso.
In one embodiment of the present disclosure, further comprising: a first detection module configured to perform detecting a change in position of the target object; a first rotation module configured to perform controlling the head to rotate according to a change in position of the target.
In one embodiment of the present disclosure, the camera is mounted on the head by a pan-tilt head.
The device further comprises: a second detection module configured to detect a change in the position of the target object; and a second rotating module configured to control the pan-tilt head to rotate according to the position change of the target object.
According to a third aspect of the embodiments of the present disclosure, there is provided a foot robot including the camera control device of the foot robot as described above.
In one embodiment of the present disclosure, further comprising: a torso; a head rotatable relative to the torso; a camera mounted over the head; a leg connected to the torso, and a foot connected to the leg.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the camera control method of the legged robot as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the camera control method of a legged robot as described above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the camera control method of a legged robot as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the embodiments of the present disclosure, whether the target object image currently shot by the camera meets the identification precision requirement corresponding to the target object is automatically judged, and the focal length of the camera is adjusted when the requirement is not met, so that the target object can be effectively identified and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a camera control method of a legged robot according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a camera control method of a legged robot according to another exemplary embodiment.
Fig. 3 is a flowchart illustrating a camera control method of a legged robot, according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating a camera control method of a legged robot, according to another exemplary embodiment.
Fig. 5 is a flowchart illustrating a camera control method of a legged robot, according to another exemplary embodiment.
Fig. 6 is a block diagram illustrating a camera control apparatus of a legged robot in accordance with an exemplary embodiment.
FIG. 7 is a schematic diagram of a legged robot shown in accordance with an exemplary embodiment.
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
Fig. 1 illustrates a camera control method of a foot robot according to an exemplary embodiment, where it should be noted that an execution subject of the camera control method of the foot robot according to the embodiment of the present disclosure is a camera control apparatus of the foot robot, and the apparatus may be configured in an electronic device, and the electronic device may include the foot robot. The legged robot of the embodiments of the present disclosure may be a multi-degree-of-freedom legged robot, such as a two-legged robot, a four-legged robot, a three-legged robot, which may include, but is not limited to, a trunk, a head, etc., to which the embodiments of the present disclosure are not limited.
As shown in fig. 1, a camera control method of a legged robot according to an embodiment of the present disclosure includes the following steps:
step 101, obtaining a working mode of the legged robot, and obtaining a target object in the working mode and a recognition accuracy requirement corresponding to the target object.
In the embodiment of the disclosure, the working mode of the legged robot can be determined according to the user requirement; the target object in that working mode can then be located to obtain the target object in the working mode, and the identification precision requirement corresponding to the target object can be set according to the target object in combination with the user requirement. It should be noted that a working mode may correspond to a plurality of target objects, and each target object corresponds to its own identification precision requirement.
For example, take the follow-shoot mode as an example. If the user requirement is to follow and shoot a target object, the user can input an instruction corresponding to follow shooting. After the instruction is received, the working mode of the legged robot can be determined to be the follow-shoot mode according to the instruction, the target object in the follow-shoot mode can be located to obtain the target object in that mode, and the identification precision requirement corresponding to the target object in the follow-shoot mode can be set according to the target object in combination with the user requirement. For instance, when the robot dog follows and shoots a target object, the identification precision requirement corresponding to the target object is to acquire a target object image of a first definition. In this way, the working mode of the legged robot and the identification precision requirement corresponding to the target object in that working mode can be accurately acquired.
As another example, take the shopping mode as an example. If the user requirement is to shop at a vending machine, the user can input an instruction corresponding to shopping. After the instruction is received, the working mode of the legged robot can be determined to be the shopping mode according to the instruction, the target objects in the shopping mode can be located to obtain the target objects in that mode, and the identification precision requirements of the target objects in the shopping mode can be set according to the target objects and the user requirement. For instance, when the robot dog shops at a vending machine, the vending machine can be located in the shopping mode to obtain its position; according to the shopping requirement of the user, the payment page of the vending machine further needs to be acquired, and the identification precision requirement corresponding to the payment page is to acquire a payment page image of a second definition. When the payment page meets the identification precision requirement, the payment page can be sent to the user, and the user can pay according to the payment page, for example by recognizing the identification code on the payment page to complete the payment. In this way, the working mode of the legged robot and the identification precision requirement corresponding to each target object in that working mode can be accurately acquired.
It should be noted that the second definition is greater than the first definition.
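As an illustration of how working modes, their target objects, and the corresponding identification precision requirements might be organized, the following is a minimal Python sketch. The mode names, the resolution values standing in for the first and second definitions, and the data structures are assumptions made for illustration and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TargetSpec:
    """A target object within a working mode and its identification precision requirement."""
    name: str
    required_resolution: tuple  # minimum (width, height) of the target region, in pixels

# Hypothetical configuration: each working mode maps to one or more target objects,
# and each target object carries its own identification precision requirement.
WORKING_MODES = {
    "follow_shoot": [
        TargetSpec(name="person", required_resolution=(320, 240)),          # "first definition"
    ],
    "shopping": [
        TargetSpec(name="vending_machine", required_resolution=(320, 240)),
        TargetSpec(name="payment_page", required_resolution=(640, 480)),    # "second definition" (higher)
    ],
}

def targets_for_mode(mode: str) -> list:
    """Return the target objects and their requirements configured for a working mode."""
    return WORKING_MODES.get(mode, [])

if __name__ == "__main__":
    for spec in targets_for_mode("shopping"):
        print(spec.name, spec.required_resolution)
```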
And 102, acquiring a target object image of a target object currently shot by a camera of the legged robot.
In addition, when the target object is in a moving state, the head of the legged robot can rotate relative to the trunk so that the camera can shoot in all directions over 360 degrees around the robot.
As an example, a change in the position of the target object is detected, and the head is controlled to rotate according to the position change of the target object.
That is, when the camera is directly mounted on the head of the legged robot, the camera control device of the legged robot detects the position of the target object and, when the target object is moving, controls the head of the legged robot to rotate according to the change in its position.
As another example, a change in the position of the target object is detected, and the pan-tilt head is controlled to rotate according to the position change of the target object.
That is, when the camera is mounted on the head of the legged robot through a pan-tilt head, the camera control device of the legged robot can detect the change in the position of the target object; if the target object is moving, the pan-tilt head can be controlled to rotate according to the position change of the target object.
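A minimal sketch of how the rotation described above might be driven from a detected change in the target object's position is given below. The body-frame geometry, the position-change threshold, and the `rotate(yaw, pitch)` actuator callback are assumptions for illustration, not an actual robot API.

```python
import math

def pan_tilt_angles(target_xyz, camera_xyz=(0.0, 0.0, 0.0)):
    """Yaw (pan) and pitch (tilt) angles, in degrees, that point the camera from
    camera_xyz toward target_xyz (body frame: x forward, y left, z up)."""
    dx = target_xyz[0] - camera_xyz[0]
    dy = target_xyz[1] - camera_xyz[1]
    dz = target_xyz[2] - camera_xyz[2]
    yaw = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return yaw, pitch

def track_target(rotate, previous_xyz, current_xyz, min_change_m=0.05):
    """If the target object has moved by more than min_change_m metres, rotate the
    head (or pan-tilt head) toward its new position. `rotate` is a hypothetical
    actuator command taking yaw and pitch in degrees."""
    if math.dist(previous_xyz, current_xyz) > min_change_m:
        yaw, pitch = pan_tilt_angles(current_xyz)
        rotate(yaw, pitch)
    return current_xyz

if __name__ == "__main__":
    # Example: the target moves from 2 m straight ahead to 2 m ahead and 1 m to the left.
    track_target(lambda yaw, pitch: print(f"rotate yaw={yaw:.1f} deg, pitch={pitch:.1f} deg"),
                 previous_xyz=(2.0, 0.0, 0.3), current_xyz=(2.0, 1.0, 0.3))
```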
In the embodiment of the present disclosure, the camera control device of the foot robot may be connected to the camera, the camera may upload the image of the target object currently captured by the camera to the camera control device of the foot robot, and the camera control device of the foot robot may acquire the image of the target object currently captured by the camera.
And 103, judging whether the target object image meets the identification precision requirement.
As an example, the camera control device of the legged robot may determine the recognition accuracy of the acquired target object image and judge whether it satisfies the recognition accuracy requirement; for example, it may judge whether the target object image reaches the recognition accuracy required for the corresponding feature extraction, so that feature extraction can be performed.
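One plausible way to implement this judgment is to compare the resolution of the target region in the captured image with the resolution required for feature extraction. The sketch below assumes the target object's bounding box has already been detected, and the required values are illustrative only.

```python
def meets_precision_requirement(target_box, required_resolution):
    """Return True if the target region in the current image is large enough for
    reliable feature extraction.

    target_box          -- (x_min, y_min, x_max, y_max) of the detected target object, in pixels
    required_resolution -- minimum (width, height), in pixels, demanded by the
                           identification precision requirement
    """
    width = target_box[2] - target_box[0]
    height = target_box[3] - target_box[1]
    return width >= required_resolution[0] and height >= required_resolution[1]

if __name__ == "__main__":
    # The detected payment page occupies 500 x 300 pixels but 640 x 480 is required,
    # so the requirement is not met and the focal length should be adjusted.
    print(meets_precision_requirement((100, 100, 600, 400), (640, 480)))  # False
```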
And 104, if the identification precision requirement is not met, adjusting the focal length of the camera.
Optionally, when the target object image does not meet the identification precision requirement, a first resolution of the target object image may be acquired; a second resolution meeting the identification precision requirement may be acquired; and the focal length of the camera may be adjusted according to the difference between the first resolution and the second resolution. See the detailed description in the following embodiments.
In addition, when the target object image meets the identification accuracy requirement, the target object image can be sent to the user.
In conclusion, whether the target object image currently shot by the camera meets the identification precision requirement corresponding to the target object is automatically judged, and the focal length of the camera is adjusted when the requirement is not met, so that the target object can be effectively identified and the user experience is improved.
In order to make the target object image meet the identification precision requirement, as shown in fig. 2, in the embodiment of the present disclosure, when the target object image does not meet the identification precision requirement corresponding to the target object, a first resolution of the target object image may be acquired, a second resolution meeting the identification precision requirement may be acquired, and the focal length of the camera may be adjusted according to the difference between the first resolution and the second resolution. The steps of the embodiment shown in fig. 2 are as follows:
step 201, obtaining a working mode of the legged robot, and obtaining a target object in the working mode and a recognition accuracy requirement corresponding to the target object.
Step 202, acquiring a target object image of a target object currently shot by a camera of the legged robot.
And step 203, judging whether the target object image meets the identification precision requirement.
For detailed description of steps 201-203 of the embodiment of the present disclosure, reference may be made to steps 101-103 of the embodiment shown in fig. 1, and the detailed description of the present disclosure is omitted.
And step 204, if the identification precision requirement is not met, acquiring a first resolution of the target object image.
In the embodiment of the disclosure, when the target object image does not meet the identification precision requirement corresponding to the target object, the first resolution of the target object image may be acquired. For example, when the target object is a person and the definition of the person image does not meet the face feature extraction requirement, the resolution of the person image may be obtained using an image resolution extraction algorithm, and the obtained resolution is used as the first resolution.
Step 205, obtaining a second resolution meeting the identification precision requirement.
In the embodiment of the present disclosure, the camera control device of the legged robot may set in advance a resolution corresponding to the requirement for the recognition accuracy, and use the resolution meeting the requirement for the recognition accuracy as the second resolution.
And step 206, adjusting the focal length of the camera according to the resolution difference between the first resolution and the second resolution.
Then, the difference between the first resolution and the second resolution is calculated, and the focal length of the camera is adjusted according to the result. For example, when the first resolution is smaller than the second resolution, the focal length of the camera may be lengthened to enlarge the imaging size of the target object, so as to increase the first resolution of the target object image until the first resolution equals the second resolution or the difference between them is minimized; conversely, when the first resolution is greater than the second resolution, the focal length of the camera may be shortened to reduce the imaging size of the target object, so as to decrease the first resolution of the target object image until the first resolution equals the second resolution or the difference between them is minimized.
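The following sketch illustrates one way such an adjustment could be computed. Under a simple pinhole-camera assumption the imaging size of the target scales roughly in proportion to the focal length, so the focal length is lengthened when the first resolution is below the second and shortened when it is above; this proportional model, the zoom limits, and the function interface are assumptions, not the disclosed implementation.

```python
def adjust_focal_length(current_focal_mm, first_resolution, second_resolution,
                        min_focal_mm=4.8, max_focal_mm=120.0):
    """Estimate a new focal length from the difference between the two resolutions.

    first_resolution  -- (width, height) of the target in the current image, in pixels
    second_resolution -- (width, height) required by the identification precision requirement

    Scaling by the required/current ratio lengthens the focal length when the first
    resolution is smaller than the second and shortens it when it is larger. The
    result is clamped to an illustrative zoom range.
    """
    scale_w = second_resolution[0] / first_resolution[0]
    scale_h = second_resolution[1] / first_resolution[1]
    scale = max(scale_w, scale_h)  # ensure both dimensions reach the requirement
    new_focal = current_focal_mm * scale
    return min(max(new_focal, min_focal_mm), max_focal_mm)

if __name__ == "__main__":
    # Target currently imaged at 500 x 300 px while 640 x 480 px is required:
    # the focal length is lengthened from 10 mm to 16 mm.
    print(adjust_focal_length(10.0, (500, 300), (640, 480)))
```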
In summary, by automatically judging whether the target object image currently captured by the camera meets the identification precision requirement, and adjusting the focal length of the camera according to the resolution difference between the first resolution and the second resolution when the requirement is not met, the target object can be effectively identified and the target object image can be made to meet the identification precision requirement.
To handle the case in which the identification precision requirement still cannot be met after the focal length of the camera is adjusted, as shown in fig. 3, in the embodiment of the present disclosure, when the target object image still cannot meet the identification precision requirement corresponding to the target object after the focal length of the camera is adjusted, the current distance between the legged robot and the target object may be acquired, and the distance between the legged robot and the target object may be controlled according to the current distance until the identification precision requirement is met. The steps of the embodiment shown in fig. 3 are as follows:
step 301, obtaining a working mode of the legged robot, and obtaining a target object in the working mode and a recognition accuracy requirement corresponding to the target object.
Step 302, a target object image of a target object currently shot by a camera of the legged robot is acquired.
And step 303, judging whether the target object image meets the target object identification precision requirement.
Step 304, if the identification precision requirement is not satisfied, acquiring a first resolution of the target object image.
Step 305, obtaining a second resolution meeting the identification precision requirement.
Step 306, adjusting the focal length of the camera according to the resolution difference between the first resolution and the second resolution.
For a detailed description of steps 301-306 of the embodiment of the present disclosure, reference may be made to steps 201-206 of the embodiment shown in fig. 2, and the detailed description of the present disclosure is omitted.
And 307, if the identification precision requirement is still not met, acquiring the current distance between the foot type robot and the target object.
In the embodiment of the present disclosure, if the focal length of the camera has been adjusted according to the difference between the first resolution and the second resolution and the identification precision requirement is still not satisfied, the current distance between the legged robot and the target object may be acquired. For example, the current position of the legged robot may be used as the origin of coordinates, the position of the target object may be used as the target point, and the current distance between the legged robot and the target object may be calculated according to the distance formula.
And 308, controlling the distance between the foot type robot and the target object according to the current distance until the requirement of identification precision is met.
As an example, when the first resolution is smaller than the second resolution and the identification precision requirement is still not met after the focal length of the camera is adjusted according to the difference between the first resolution and the second resolution, the legged robot may be controlled to approach the target object so as to reduce the distance between the legged robot and the target object until the identification precision requirement is met.
As another example, when the first resolution is greater than the second resolution and the identification precision requirement is still not met after the focal length of the camera is adjusted according to the difference between the first resolution and the second resolution, the legged robot may be controlled to move away from the target object so as to increase the distance between the legged robot and the target object until the identification precision requirement is met.
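As a sketch of steps 307 and 308: the current distance follows from the ordinary Euclidean distance formula with the robot's current position as the coordinate origin, and the robot is then commanded to approach or retreat depending on whether the target is still imaged too small or too large. The step size and the returned locomotion command are hypothetical.

```python
import math

def control_distance(robot_xy, target_xy, first_resolution, second_resolution, step_m=0.2):
    """Return the current distance to the target object and a locomotion command.

    With the legged robot's current position as the coordinate origin and the target
    object as the target point, the current distance follows from the distance formula.
    If the target is still imaged too small (first resolution below the second), the
    robot should approach; if too large, it should move away.
    """
    distance = math.dist(robot_xy, target_xy)
    if first_resolution[0] < second_resolution[0] or first_resolution[1] < second_resolution[1]:
        command = ("approach", max(distance - step_m, 0.0))   # new desired distance
    else:
        command = ("retreat", distance + step_m)
    return distance, command

if __name__ == "__main__":
    # 3 m away and the target image is still too small: close the distance by one step.
    print(control_distance((0.0, 0.0), (3.0, 0.0), (500, 300), (640, 480)))
```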
In summary, after the focal length of the camera is adjusted, when the target object image still cannot meet the identification accuracy requirement, the current distance between the foot robot and the target object may be acquired, and the distance between the foot robot and the target object may be controlled according to the current distance until the identification accuracy requirement is met.
In order to accurately determine the working mode of the legged robot, as shown in fig. 4, in the embodiment of the present disclosure, the working mode of the legged robot may be determined according to a user instruction, and the embodiment shown in fig. 4 may include the following steps:
step 401, receiving a user instruction.
In the embodiment of the disclosure, the camera control device of the foot robot can provide a user interaction interface to interact with a user, the user can input a corresponding instruction in the user interaction interface according to requirements, and the camera control device of the foot robot can receive the instruction input by the user.
Step 402, determining the working mode of the legged robot according to the user instruction.
It will be appreciated that a user instruction can characterize the user requirement, so the working mode of the legged robot can be determined from the instruction corresponding to that requirement. For example, if the user requirement is to follow and shoot a target object, the working mode of the legged robot can be determined to be the follow-shoot mode according to the corresponding instruction. For another example, if the user requirement is shopping, the working mode of the legged robot can be determined to be the shopping mode according to the corresponding instruction.
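A minimal sketch of mapping a received user instruction to a working mode is given below; the keyword matching, the instruction strings, and the mode names are illustrative assumptions rather than the disclosed implementation.

```python
# Hypothetical mapping from instruction keywords to working modes.
INSTRUCTION_TO_MODE = {
    "follow": "follow_shoot",
    "track": "follow_shoot",
    "shop": "shopping",
    "buy": "shopping",
}

def determine_working_mode(user_instruction: str, default_mode: str = "idle") -> str:
    """Determine the legged robot's working mode from a user instruction string."""
    text = user_instruction.lower()
    for keyword, mode in INSTRUCTION_TO_MODE.items():
        if keyword in text:
            return mode
    return default_mode

if __name__ == "__main__":
    print(determine_working_mode("Please follow me and take photos"))  # follow_shoot
    print(determine_working_mode("I want to buy a drink"))             # shopping
```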
And step 403, acquiring the target object in the working mode and the identification precision requirement corresponding to the target object.
Step 404, acquiring a target object image of a target object currently shot by a camera of the legged robot.
Step 405, judging whether the target object image meets the identification precision requirement.
And step 406, if the identification precision requirement is not met, adjusting the focal length of the camera.
It should be noted that, steps 403 to 406 may be implemented by using any implementation manner in each embodiment of the present disclosure, and this is not limited by the embodiments of the present disclosure and is not described again.
In summary, the working mode of the foot robot can be accurately determined according to the user instruction by receiving the user instruction and determining the working mode of the foot robot according to the user instruction.
In order to accurately photograph the target objects to be photographed in a working mode, as shown in fig. 5, in the embodiment of the present disclosure, the working mode of the legged robot may include one or more target objects. When the working mode includes one target object, the target object may be located, and the camera of the legged robot may then photograph a target object image of that target object. When the working mode includes a plurality of target objects, the positional relationship between the plurality of target objects may be acquired according to the working mode, and the camera of the legged robot may then be rotated according to that positional relationship. The embodiment shown in fig. 5 may include the following steps:
step 501, obtaining a working mode of the legged robot, and obtaining a target object in the working mode and a recognition accuracy requirement corresponding to the target object. The working mode corresponds to a plurality of objects, wherein each object corresponds to the identification precision requirement.
For example, when the working mode of the legged robot is the shopping mode, the plurality of target objects in the shopping mode may be the vending machine and the payment page of the vending machine. The vending machine and its payment page correspond to different identification precision requirements; for example, the identification precision requirement of the payment page is higher than that of the vending machine.
Step 502, obtaining the position relation among a plurality of objects according to the working mode.
In the embodiment of the disclosure, after the plurality of target objects in the working mode are acquired, the positional relationship between them may be further determined according to the working mode. For example, when the working mode is the shopping mode, the legged robot may perform positioning to acquire the positional relationship between the vending machine and the payment page of the vending machine.
Step 503, controlling the rotation of the camera of the foot robot according to the position relation among the plurality of targets.
Furthermore, the camera of the legged robot can be controlled to rotate according to the positional relationship between the plurality of target objects, so that the target object to be photographed in the working mode can be accurately framed. For example, when the working mode is the shopping mode and the legged robot has walked up to the vending machine, the camera of the legged robot can be rotated upward according to the positional relationship between the vending machine and its payment page so as to find the payment page of the vending machine and photograph it.
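The sketch below shows one way a stored positional relationship between the target objects of a mode could be used to point the camera at the next target: the relationship is expressed as a relative offset from one target to another, and both the offset values and the `rotate` command are assumptions made for illustration.

```python
import math

# Hypothetical positional relationship for the shopping mode: the payment page sits
# roughly 0.4 m above the centre of the vending machine's front panel.
RELATIVE_OFFSETS = {
    ("vending_machine", "payment_page"): (0.0, 0.0, 0.4),  # (dx, dy, dz) in metres
}

def point_camera_at(rotate, camera_xyz, from_target, to_target, from_target_xyz):
    """Rotate the camera from the currently framed target object toward a related one,
    using the stored positional relationship between the two target objects."""
    offset = RELATIVE_OFFSETS[(from_target, to_target)]
    target_xyz = tuple(p + o for p, o in zip(from_target_xyz, offset))
    dx, dy, dz = (t - c for t, c in zip(target_xyz, camera_xyz))
    yaw = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    rotate(yaw, pitch)

if __name__ == "__main__":
    # After walking up to the vending machine, tilt the camera upward to frame the payment page.
    point_camera_at(lambda yaw, pitch: print(f"rotate yaw={yaw:.1f} deg, pitch={pitch:.1f} deg"),
                    camera_xyz=(0.0, 0.0, 0.5),
                    from_target="vending_machine", to_target="payment_page",
                    from_target_xyz=(0.8, 0.0, 0.7))
```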
And step 504, acquiring a target object image of a target object currently shot by a camera of the legged robot.
And 505, judging whether the target object image meets the identification precision requirement.
Step 506, if the identification precision requirement is not met, the focal length of the camera is adjusted.
It should be noted that steps 504 to 506 may be implemented by using any implementation manner in the embodiments of the present disclosure, which is not limited by the embodiments of the present disclosure and is not described again.
In summary, the positional relationship between the plurality of objects is acquired according to the operation mode, and the camera of the legged robot is controlled to rotate according to the positional relationship between the plurality of objects, whereby the object to be photographed in the operation mode can be photographed accurately.
According to the camera control method of the legged robot described above, the working mode of the legged robot is acquired, and the target object in the working mode and the identification precision requirement corresponding to the target object are acquired; a target object image of the target object currently shot by the camera of the legged robot is acquired; whether the target object image meets the identification precision requirement is judged; and the focal length of the camera is adjusted if the requirement is not met. The method automatically judges whether the target object image currently shot by the camera meets the identification precision requirement and adjusts the focal length of the camera when it does not, so that the target object can be effectively identified and the user experience is improved.
Fig. 6 is a block diagram of a camera control apparatus of a legged robot, according to an exemplary embodiment. As shown in fig. 6, the camera control device 600 of the foot robot includes: a first obtaining module 610, a second obtaining module 620, a judging module 630 and an adjusting module 640.
The first obtaining module 610 is configured to execute obtaining of a working mode of the legged robot, and obtain a target object in the working mode and an identification precision requirement corresponding to the target object; a second acquiring module 620 configured to perform acquiring a target image of a target currently photographed by a camera of the legged robot; a judging module 630 configured to perform judgment on whether the target object image satisfies the identification accuracy requirement; an adjustment module 640 configured to perform an adjustment of the focal length of the camera if the recognition accuracy requirement is not met.
As a possible implementation manner of the embodiment of the present disclosure, the adjusting module 640 includes: the device comprises a first acquisition unit, a second acquisition unit and an adjustment unit.
The first acquisition unit is configured to acquire a first resolution of an image of a target object; a second acquisition unit configured to perform acquisition of a second resolution that satisfies the identification accuracy requirement; an adjustment unit configured to perform adjustment of a focal length of the camera according to a resolution difference of the first resolution and the second resolution.
As a possible implementation manner of the embodiment of the present disclosure, the camera control apparatus 600 of the foot robot further includes: the device comprises a third acquisition module and a first control module.
The third acquisition module is configured to acquire the current distance between the legged robot and the target object if the identification precision requirement is still not met; and the first control module is configured to control the distance between the foot type robot and the target object according to the current distance until the identification precision requirement is met.
As a possible implementation manner of the embodiment of the present disclosure, the first obtaining module 610 includes: a receiving unit and a determining unit.
Wherein the receiving unit is configured to execute receiving a user instruction; a determining unit configured to determine the working mode of the legged robot according to the user instruction.
As a possible implementation manner of the embodiment of the present disclosure, the working mode corresponds to a plurality of objects, where each of the objects corresponds to a recognition accuracy requirement.
As a possible implementation manner of the embodiment of the present disclosure, the camera control apparatus 600 of the foot robot further includes: the device comprises a fourth acquisition module and a second control module.
The fourth acquisition module is configured to acquire the position relation among the plurality of targets according to the working mode; and the second control module is configured to execute rotation control on the camera of the foot robot according to the position relation among the plurality of targets.
As one possible implementation of the disclosed embodiments, a legged robot includes a torso and a head, wherein a camera is mounted on the head and the head is rotatable relative to the torso.
As a possible implementation manner of the embodiment of the present disclosure, the camera control apparatus 600 of the foot robot further includes: the device comprises a first detection module and a first rotation module.
The first detection module is configured to detect the position change of the target object; a first rotation module configured to perform a head rotation control according to a position change of the target object.
As one possible implementation of the disclosed embodiment, the camera is mounted on the head through a pan-tilt. The camera control apparatus 600 of the legged robot further includes: the second detection module and the second rotation module.
Wherein the second detection module is configured to detect a change in the position of the target object; and the second rotating module is configured to control the pan-tilt head to rotate according to the position change of the target object.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The camera control device of the legged robot in the embodiments of the present disclosure acquires the working mode of the legged robot, acquires the target object in the working mode and the identification precision requirement corresponding to the target object, acquires a target object image of the target object currently shot by the camera of the legged robot, judges whether the target object image meets the identification precision requirement, and adjusts the focal length of the camera if the requirement is not met. The device thus automatically judges whether the target object image currently shot by the camera meets the identification precision requirement and adjusts the focal length of the camera when it does not, so that the target object can be effectively identified and the user experience is improved.
In order to implement the above embodiment, as shown in fig. 7, an embodiment of the present disclosure further provides a foot robot 700 including a camera control device 710 of the foot robot. The camera control device 710 of the foot robot has the same configuration and function as the camera control device 600 of the foot robot in fig. 6.
As a possible implementation manner of the embodiment of the present disclosure, the legged robot 700 further includes: a torso, a head, a camera mounted over the head, legs connected to the torso, and feet connected to the legs.
Fig. 8 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment. As shown in fig. 8, the electronic device 800 includes:
a memory 810 and a processor 820, a bus 830 connecting different components (including the memory 810 and the processor 820), wherein the memory 810 stores a computer program, and when the processor 820 executes the program, the camera control method of the legged robot according to the embodiment of the present disclosure is implemented.
Bus 830 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
The electronic device 800 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 800 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 810 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 840 and/or cache memory 850. The electronic device 800 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 860 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard drive"). Although not shown in FIG. 8, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 830 by one or more data media interfaces. Memory 810 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 880 having a set (at least one) of program modules 870, which may include but are not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may include an implementation of a network environment, may be stored in, for example, memory 810. Program modules 870 generally perform the functions and/or methodologies of embodiments described in this disclosure.
The electronic device 800 may also communicate with one or more external devices 890 (e.g., keyboard, pointing device, display 891, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 892. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 893. As shown in FIG. 8, the network adapter 893 communicates with the other modules of the electronic device 800 over a bus 830. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, to name a few.
The processor 820 executes various functional applications and data processing by executing programs stored in the memory 810.
It should be noted that, for the implementation process and the technical principle of the electronic device of the embodiment, reference is made to the foregoing explanation of the camera control method of the legged robot according to the embodiment of the present disclosure, and details are not described herein again.
The electronic device provided by the embodiments of the present disclosure can execute the camera control method of the legged robot described above: the working mode of the legged robot is acquired, the target object in the working mode and the identification precision requirement corresponding to the target object are acquired, a target object image of the target object currently shot by the camera of the legged robot is acquired, whether the target object image meets the identification precision requirement is judged, and the focal length of the camera is adjusted if the requirement is not met. Whether the target object image currently shot by the camera meets the identification precision requirement is thus automatically judged and the focal length of the camera is adjusted when it is not, so that the target object can be effectively identified and the user experience is improved.
In order to implement the above embodiments, the present disclosure also proposes a computer-readable storage medium.
Wherein the instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the camera control method of the legged robot as described above. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
To achieve the above embodiments, the present disclosure also provides a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor of an electronic device, implement the camera control method of a legged robot as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

1. A camera control method of a legged robot, comprising:
acquiring a working mode of the foot type robot, and acquiring a target object in the working mode and an identification precision requirement corresponding to the target object;
acquiring a target object image of a target object currently shot by a camera of the foot robot;
judging whether the target object image meets the identification precision requirement or not;
and if the identification precision requirement is not met, adjusting the focal length of the camera.
2. The camera control method of the legged robot according to claim 1, characterized in that said adjusting the focal length of the camera comprises:
acquiring a first resolution of the target object image;
acquiring a second resolution meeting the identification precision requirement;
adjusting a focal length of the camera according to a resolution difference between the first resolution and the second resolution.
3. The camera control method of a legged robot according to claim 1, characterized by further comprising, after said adjusting the focal length of said camera:
if the identification precision requirement is still not met, acquiring the current distance between the foot type robot and the target object;
and controlling the distance between the foot type robot and the target object according to the current distance until the identification precision requirement is met.
4. The camera control method of the legged robot according to claim 1, wherein said obtaining the operation mode of the legged robot includes:
receiving a user instruction;
and determining the working mode of the foot type robot according to the user instruction.
5. The camera control method of a legged robot according to claim 1, characterized in that said operation mode corresponds to a plurality of objects, each of which corresponds to a recognition accuracy requirement.
6. The camera control method of the legged robot according to claim 5, characterized by further comprising:
acquiring the position relation among the multiple target objects according to the working mode;
and performing rotation control on the camera of the foot type robot according to the position relation among the plurality of target objects.
7. The camera control method of the legged robot according to claim 1, characterized in that the legged robot includes a trunk and a head, wherein the camera is mounted on the head, and the head is rotatable with respect to the trunk.
8. The camera control method of the legged robot according to claim 7, characterized by further comprising:
detecting a change in position of the target;
and controlling the head to rotate according to the position change of the target object.
9. The method of controlling a camera of a legged robot according to claim 7, wherein the camera is mounted on the head by a pan-tilt, the method further comprising:
detecting a change in the position of the target object;
and controlling the pan-tilt head to rotate according to the position change of the target object.
10. A camera control apparatus of a legged robot, comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is configured to execute the acquisition of a working mode of the legged robot and acquire a target object in the working mode and an identification precision requirement corresponding to the target object;
a second acquisition module configured to perform acquisition of a target image of a target currently photographed by a camera of the legged robot;
the judging module is configured to execute the judgment of whether the target object image meets the identification precision requirement;
an adjustment module configured to perform an adjustment of a focal length of the camera if the recognition accuracy requirement is not met.
11. The camera control apparatus of the legged robot according to claim 10, characterized in that the adjusting module includes:
a first acquisition unit configured to perform acquisition of a first resolution of the target object image;
a second acquisition unit configured to perform acquisition of a second resolution that satisfies the identification accuracy requirement;
an adjustment unit configured to perform adjustment of a focal length of the camera according to a resolution difference of the first resolution and the second resolution.
12. The camera control apparatus of the legged robot according to claim 10, characterized in that the apparatus further includes:
a third obtaining module configured to obtain a current distance between the legged robot and a target object if the identification accuracy requirement is still not met;
and the first control module is configured to control the distance between the legged robot and the target object according to the current distance until the identification precision requirement is met.
13. The camera control apparatus of the legged robot according to claim 10, characterized in that the first acquisition module includes:
a receiving unit configured to perform receiving a user instruction;
a determining unit configured to determine the working mode of the legged robot according to the user instruction.
14. The camera control device of the legged robot according to claim 10, wherein the operation mode corresponds to a plurality of objects each corresponding to a recognition accuracy requirement.
15. The camera control apparatus of the legged robot according to claim 14, characterized by further comprising:
a fourth obtaining module configured to obtain the position relation among the plurality of targets according to the working mode;
a second control module configured to perform rotation control of the camera of the foot robot according to a positional relationship between the plurality of targets.
16. The camera control apparatus of the legged robot according to claim 10, characterized in that the legged robot includes a trunk and a head, wherein the camera is mounted above the head, and the head is rotatable with respect to the trunk.
17. The camera control apparatus of the legged robot according to claim 16, characterized by further comprising:
a first detection module configured to detect a position change of the target object; and
a first rotation module configured to control the head to rotate according to the position change of the target object.
18. The camera control apparatus of the legged robot according to claim 16, characterized in that the camera is mounted on the head via a pan-tilt, the apparatus further comprising:
a second detection module configured to detect a position change of the target object; and
a second rotation module configured to control the pan-tilt to rotate according to the position change of the target object.
19. A legged robot, comprising: a camera control apparatus for a legged robot as claimed in any one of claims 10-18.
20. The legged robot according to claim 19, further comprising:
a torso;
a head rotatable relative to the torso;
a camera mounted above the head;
a leg connected to the torso, and a foot connected to the leg.
21. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the camera control method of the legged robot as claimed in any one of claims 1-9.
22. A computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the camera control method of the legged robot of any one of claims 1-9.
23. A computer program product, characterized in that it comprises a computer program which, when executed by a processor, implements the camera control method of a legged robot according to any one of claims 1 to 9.
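The working-mode claims (claims 10 and 13-15) tie each working mode to one or more target objects, each with its own identification precision requirement. The snippet below is a minimal, hypothetical sketch of such a mapping; the mode names, target objects, and pixel thresholds are illustrative assumptions and are not values given in the patent.

```python
# Hypothetical mapping from working mode to the target objects it involves and the
# identification precision each one requires (expressed here as a minimum pixel
# height of the target region in the image). Mode names and thresholds are
# illustrative assumptions, not values from the patent.
WORKING_MODES = {
    "follow_owner": {"face": 120, "body": 80},
    "fetch_item":   {"hand": 60, "item": 90},
    "patrol":       {"qr_code": 100},
}

def targets_for(mode: str) -> dict:
    """Return {target object: minimum resolution} for a user-selected working mode."""
    if mode not in WORKING_MODES:
        raise ValueError(f"unknown working mode: {mode}")
    return WORKING_MODES[mode]

print(targets_for("follow_owner"))  # {'face': 120, 'body': 80}
```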
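Claim 11 adjusts the focal length according to the difference between the first resolution (the target object as currently imaged) and the second resolution (the smallest resolution that satisfies the identification precision requirement). One simple realization, assuming the imaged size of a target at a fixed distance scales roughly linearly with focal length, is sketched below; the function name and the millimetre values in the example are assumptions, not part of the patent. If the required focal length exceeds the zoom range, claim 12 falls back to changing the distance between the legged robot and the target object instead.

```python
def adjusted_focal_length(current_focal_length_mm: float,
                          first_resolution_px: int,
                          second_resolution_px: int) -> float:
    """Scale the focal length so the target object, currently imaged at
    first_resolution_px (its height in pixels), would roughly reach
    second_resolution_px, the minimum height meeting the precision requirement.
    For a target at a fixed distance, image size grows approximately linearly
    with focal length, so the ratio of resolutions gives the zoom factor."""
    if first_resolution_px <= 0:
        raise ValueError("target object not found in the current frame")
    return current_focal_length_mm * (second_resolution_px / first_resolution_px)

# Example: the target spans 40 px but recognition needs at least 120 px,
# so a 4.8 mm lens would be driven toward roughly 14.4 mm (zoom range permitting).
print(round(adjusted_focal_length(4.8, 40, 120), 1))  # 14.4
```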
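Claims 9, 17, and 18 rotate the head or the pan-tilt according to the target object's position change. A common way to derive the rotation command is to map the target's offset from the image centre to pan and tilt angles through the camera's field of view; the sketch below assumes a small-angle pinhole mapping and an illustrative 69° x 42° field of view, neither of which is specified in the patent.

```python
def pan_tilt_correction(target_x_px: float, target_y_px: float,
                        image_w: int, image_h: int,
                        hfov_deg: float = 69.0, vfov_deg: float = 42.0):
    """Return (pan_deg, tilt_deg) increments that re-centre the target object.
    Positive pan turns right, positive tilt turns up; image y grows downward."""
    dx = (target_x_px - image_w / 2) / image_w   # normalised horizontal offset
    dy = (target_y_px - image_h / 2) / image_h   # normalised vertical offset
    return dx * hfov_deg, -dy * vfov_deg

# Example: target detected at (960, 270) in a 1280x720 frame
# -> pan right about 17 degrees, tilt up about 5 degrees.
print(pan_tilt_correction(960, 270, 1280, 720))
```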
CN202110900113.7A 2021-08-06 2021-08-06 Camera control method and device for foot type robot and foot type robot Pending CN115706854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110900113.7A CN115706854A (en) 2021-08-06 2021-08-06 Camera control method and device for foot type robot and foot type robot

Publications (1)

Publication Number Publication Date
CN115706854A true CN115706854A (en) 2023-02-17

Family

ID=85179117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110900113.7A Pending CN115706854A (en) 2021-08-06 2021-08-06 Camera control method and device for foot type robot and foot type robot

Country Status (1)

Country Link
CN (1) CN115706854A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4998126A (en) * 1988-11-04 1991-03-05 Nikon Corporation Automatic focus adjustment camera
US20090304237A1 (en) * 2005-06-29 2009-12-10 Kyocera Corporation Biometric Authentication Apparatus
CN202135260U (en) * 2011-07-30 2012-02-01 山东电力研究院 High-definition image detection system of transformer station inspection robot
CN102595049A (en) * 2012-03-16 2012-07-18 盛司潼 Automatic focusing control system and method
US20130296737A1 (en) * 2012-05-02 2013-11-07 University Of Maryland, College Park Real-time tracking and navigation system and method for minimally invasive surgical procedures
US20160379071A1 (en) * 2015-06-25 2016-12-29 Beijing Lenovo Software Ltd. User Identification Method and Electronic Device
CN108600691A (en) * 2018-04-02 2018-09-28 深圳臻迪信息技术有限公司 Image-pickup method, apparatus and system
CN109413373A (en) * 2018-02-07 2019-03-01 中科太网科技(北京)有限公司 Video monitoring equipment control method, device, video monitoring equipment and server
WO2019100814A1 (en) * 2017-11-24 2019-05-31 阿里巴巴集团控股有限公司 Method and apparatus for assisting image of article complying with requirements, and electronic device
CN110293554A (en) * 2018-03-21 2019-10-01 北京猎户星空科技有限公司 Control method, the device and system of robot
CN110378165A (en) * 2019-05-31 2019-10-25 阿里巴巴集团控股有限公司 Two-dimensional code identification method, two dimensional code fixation and recognition method for establishing model and its device
CN110850872A (en) * 2019-10-31 2020-02-28 深圳市优必选科技股份有限公司 Robot inspection method and device, computer readable storage medium and robot
CN112004025A (en) * 2020-09-02 2020-11-27 广东电网有限责任公司 Unmanned aerial vehicle automatic driving zooming method, system and equipment based on target point cloud
CN112345076A (en) * 2020-09-16 2021-02-09 北京卓立汉光仪器有限公司 Spectrum-taking system capable of adjusting resolution ratio and spectrum-taking machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination