CN113810614A - Video recording method and device and wearable device - Google Patents

Video recording method and device and wearable device

Info

Publication number
CN113810614A
CN113810614A
Authority
CN
China
Prior art keywords
camera
recording
video
shooting
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111108701.3A
Other languages
Chinese (zh)
Inventor
李伟超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202111108701.3A priority Critical patent/CN113810614A/en
Publication of CN113810614A publication Critical patent/CN113810614A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827Portable transceivers
    • H04B1/385Transceivers carried on the body, e.g. in helmets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Abstract

This application is applicable to the technical field of wearable devices and provides a video recording method, a video recording apparatus, and a wearable device. The method includes the following steps: if a video recording command is received, controlling a camera of the wearable device to perform a shooting action; determining a recording scene from an image captured by the camera; adjusting a shooting angle of the camera according to the recording scene; and performing a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video. This method can improve the quality of the obtained video.

Description

Video recording method and device and wearable device
Technical Field
The present application relates to the technical field of wearable devices, and in particular to a video recording method and apparatus, a wearable device, and a computer-readable storage medium.
Background
Existing children's phone watches offer functions such as calling and positioning, and also support functions such as video recording.
When a child records a video with a phone watch worn on the wrist, the child usually has to move the arm wearing the watch back and forth so that the camera of the watch can be aimed at the scene to be recorded. Because this back-and-forth arm movement makes the picture shake, the quality of the recorded video is poor.
Disclosure of Invention
Embodiments of the present application provide a video recording method, a video recording apparatus, and a wearable device, which can improve the quality of the obtained video.
In a first aspect, an embodiment of the present application provides a video recording method, which is applied to a wearable device, and includes:
if a video recording command is received, controlling a camera of the wearable device to perform a shooting action;
determining a recording scene from an image captured by the camera;
adjusting a shooting angle of the camera according to the recording scene;
and performing a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
In a second aspect, an embodiment of the present application provides a video recording apparatus, which is applied to a wearable device, and includes:
a video recording command receiving module, configured to control a camera of the wearable device to perform a shooting action if a video recording command is received;
a scene recognition module, configured to determine a recording scene from an image captured by the camera;
a camera control module, configured to adjust a shooting angle of the camera according to the recording scene;
and a video recording module, configured to perform a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
In a third aspect, an embodiment of the present application provides a wearable device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a wearable device, causes the wearable device to perform the method of the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages:
In the embodiments of the present application, after receiving a video recording command, the wearable device controls the camera to perform a shooting action, identifies the recording scene from the image captured by the camera, adjusts the shooting angle of the camera according to the identified recording scene, and performs the recording action with the angle-adjusted camera to obtain a recorded video. Because the wearable device adjusts the shooting angle of the camera according to the recording scene it has identified, the angle-adjusted camera is better suited to shooting the current recording scene, which ensures that the wearable device records the video more accurately. In addition, because the video is recorded by adjusting the shooting angle of the camera, the user does not need to move the limb wearing the wearable device back and forth; this prevents the picture of the recorded video from shaking, improves the stability of the recorded video, and therefore improves its quality.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a flowchart of a video recording method according to an embodiment of the present application;
Fig. 2 is a flowchart of another video recording method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise.
Embodiment one:
At present, a user often records videos with a children's phone watch while wearing it. During recording, if the subject being shot is moving, the user has to move the arm back and forth to keep the camera aimed at the subject, and this back-and-forth arm movement causes the picture to shake, so the quality of the recorded video is poor.
To solve this technical problem, an embodiment of the present application provides a video recording method in which the wearable device adjusts the shooting angle of its camera according to the recording scene it is currently in, and performs the recording action with the angle-adjusted camera to obtain a recorded video.
Because the shooting angle of the wearable device's camera is adjusted automatically according to the recording scene and the video is recorded with the adjusted camera, the user does not need to move the limb wearing the device back and forth; this prevents the picture of the recorded video from shaking and improves the quality of the recorded video.
The following describes a video recording method provided in an embodiment of the present application with reference to the drawings.
Fig. 1 shows a flowchart of a video recording method provided in an embodiment of the present application. The method is applied to a wearable device whose camera can change its shooting angle over 180 degrees up and down and 360 degrees left and right. For example, a motor (e.g., a stepping motor) built into the camera may directly rotate the camera so that every shooting angle is reachable, or the built-in motor may rotate the module in which the camera is located (e.g., the dial of a children's phone watch) to the same effect. Because the camera can reach every shooting angle, its shooting range is wider, so the wearable device can shoot in all directions even when lying flat on a desktop. The video recording method provided in this embodiment of the present application is detailed as follows:
and step S11, if a video recording command is received, controlling the camera of the wearable device to execute a shooting action.
In this embodiment, the camera executes a shooting operation to obtain a shot image.
In this embodiment, the user can issue a video recording command by directly pressing a video-recording key on the wearable device.
In some embodiments, if the wearable device is equipped with a microphone, the video recording command can also be issued by voice. Specifically, the wearable device detects through the microphone whether speech has been received, recognizes the received speech with a speech recognition algorithm to obtain a recognition result, and determines from the recognition result whether a video recording command has been received. For example, if the wearable device recognizes that the speech contains the phrase "record video", it determines that a video recording command has been received.
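As a rough illustration of this check, the following sketch matches an already-recognized transcript against trigger phrases; the speech recognizer itself is assumed to exist elsewhere, and the phrase list is hypothetical beyond the "record video" phrase mentioned above.

```python
# Minimal sketch: decide whether a recognized transcript is a recording command.
# The trigger phrases other than "record video" are illustrative assumptions.

TRIGGER_PHRASES = ("record video", "start recording")

def is_video_recording_command(transcript: str) -> bool:
    """Return True if the recognized speech contains a recording trigger phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

# Example
print(is_video_recording_command("Hey watch, record video of the game"))  # True
```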
In some embodiments, if the wearable device has established a connection (e.g., a Bluetooth connection) with another terminal device (e.g., a mobile phone), the user may send a video recording command to the wearable device through the mobile phone, for example by pressing a designated key on the phone.
In some embodiments, if the lens of the camera faces upward (toward the sky) before the wearable device receives the video recording command, the camera is first flipped up and then the shooting action is performed. Note that after the camera is flipped up, its lens usually no longer faces upward, although the motor can still drive the lens to face upward so that every shooting angle remains reachable. Because the subject to be recorded is usually not above the lens, performing the shooting action after flipping the camera up yields a more useful image.
Step S12: determine the recording scene from the image captured by the camera.
In this embodiment, scene features are preset for different recording scenes. After the camera captures an image, the wearable device extracts the features contained in the image, compares the extracted features with the preset scene features of each recording scene, and, if the extracted features match the scene features of a particular recording scene, determines that the subject to be recorded is in that recording scene.
For example, assume that the scene features of a basketball game scene include: a basketball hoop, N (N > 1) people, and target actions (such as running, dribbling, passing, and shooting) performed by those people. If the features extracted from the image captured by the camera match these scene features, the recording scene of the subject to be recorded is determined to be a basketball game scene.
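A minimal sketch of this feature-matching step is given below; the feature labels, the preset scene definitions, and the upstream feature detector are assumptions for illustration only.

```python
# Illustrative sketch: match extracted image features against preset
# per-scene feature sets. Feature names and scene definitions are assumed.

PRESET_SCENES = {
    "basketball_game": {"hoop", "multiple_people", "target_action"},
    "classroom":       {"blackboard", "desks", "seated_people"},
}

def match_scene(extracted_features: set) -> str:
    """Return the first preset scene whose features are all present."""
    for scene, required in PRESET_SCENES.items():
        if required.issubset(extracted_features):
            return scene
    return "unknown"

# Features extracted from the captured image by some upstream detector
features = {"hoop", "multiple_people", "target_action", "outdoor"}
print(match_scene(features))  # -> "basketball_game"
```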
Step S13: adjust the shooting angle of the camera according to the recording scene.
In this embodiment, the wearable device adjusts the shooting angle of the camera according to the recording scene. For example, for recording scene 1, the motor pans the camera from left to right at a first speed; for recording scene 2, the motor rotates the camera from top to bottom or from bottom to top at a second speed. Because users often have different shooting requirements in different recording scenes, adjusting the shooting angle of the camera according to the recording scene improves the accuracy of the recorded video.
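The mapping from a recognized scene to a motor command could be expressed as in the following sketch; the scene names, speeds, and the MotorCommand fields are illustrative assumptions, since the text only states that different scenes use different motions and speeds.

```python
# Sketch: map a recognized recording scene to a pan/tilt command for the
# built-in motor. All concrete values below are assumptions.

from dataclasses import dataclass

@dataclass
class MotorCommand:
    axis: str              # "pan" (left-right) or "tilt" (up-down)
    speed_deg_per_s: float
    direction: int         # +1 or -1

SCENE_TO_COMMAND = {
    "recording_scene_1": MotorCommand(axis="pan",  speed_deg_per_s=15.0, direction=+1),
    "recording_scene_2": MotorCommand(axis="tilt", speed_deg_per_s=8.0,  direction=-1),
}

def command_for_scene(scene: str):
    """Return the motor command for the scene, or None to leave the camera as-is."""
    return SCENE_TO_COMMAND.get(scene)

print(command_for_scene("recording_scene_1"))
```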
Step S14: perform a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
In this embodiment, both during and after the adjustment of the shooting angle, the camera continues to perform the recording action, and recording does not stop until a recording-end command is received, so that the complete recorded video is obtained.
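A sketch of this recording loop is shown below; the camera, writer, and stop_event interfaces are hypothetical stand-ins for the device's actual recording pipeline.

```python
# Sketch of the loop implied by step S14: frames keep being written while and
# after the angle adjustment, and recording stops only when an end-recording
# command arrives. The camera, writer, and stop_event objects are hypothetical.

def record_until_stop(camera, writer, stop_event):
    """camera.read() -> frame or None; writer.write(frame) persists a frame."""
    while not stop_event.is_set():     # recording-end command not yet received
        frame = camera.read()
        if frame is not None:
            writer.write(frame)
    writer.close()                     # finalize the complete recorded video
```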
In this embodiment of the present application, after receiving a video recording command, the wearable device controls the camera to perform a shooting action, performs scene recognition on the image captured by the camera, adjusts the shooting angle of the camera according to the recognized recording scene, and performs the recording action with the angle-adjusted camera to obtain a recorded video. Because the wearable device adjusts the shooting angle of the camera according to the recording scene it has recognized, the angle-adjusted camera is better suited to the recording scene in which the wearable device is currently located, which ensures that the recorded video is more accurate. In addition, because the video is recorded by adjusting the shooting angle of the camera, the user does not need to move the limb wearing the wearable device back and forth; this prevents the picture of the recorded video from shaking, improves the stability of the recorded video, and therefore improves its quality.
Fig. 2 shows a flowchart of another video recording method provided in an embodiment of the present application. In this embodiment, a ball game scene is taken as an example. Step S21, step S22, and step S25 are the same as step S11, step S12, and step S14, respectively, and are not repeated here.
Step S21: if a video recording command is received, control the camera of the wearable device to perform a shooting action.
Step S22: determine the recording scene from the image captured by the camera.
Step S23: if the recording scene is a ball game scene, select a corresponding camera control strategy according to the ball game scene, where the camera control strategy includes tracking the ball and the person in contact with the ball.
In this embodiment, it is considered that in a ball game the players need to run back and forth on the court, and the outcome of the game is related to the position of the ball. For example, if the ball game is a basketball game, the outcome is related to whether the basketball is thrown into the basket; that is, when watching a basketball game, a viewer usually focuses on the basketball, the passer, and the receiver. Therefore, in this embodiment of the present application, the camera control strategy corresponding to the ball game scene is set to include tracking the ball and the person in contact with the ball, which helps to control the shooting angle of the camera more accurately later.
Step S24: adjust the shooting angle of the camera according to the camera control strategy.
In this embodiment, while tracking and shooting the ball and the person in contact with the ball, if it is recognized that the person in contact with the ball is running with the ball, the lens of the camera is controlled to stay aimed at that person or at the ball. If it is recognized that the person in contact with the ball is about to pass it, the motion trail of the ball is determined, the direction and distance of the ball's movement are predicted from the motion trail, and the shooting angle of the camera is adjusted according to the prediction (direction and distance) so that the lens of the camera is aimed at the predicted direction of the pass, allowing the ball to be locked onto quickly after it is passed.
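One simple way to realize the described prediction is constant-velocity extrapolation of the ball's recent image-plane positions, as in the sketch below; the motion model and look-ahead horizon are assumptions, since the text only states that direction and distance are predicted from the motion trail.

```python
# Hedged sketch: predict where a passed ball will travel by extrapolating its
# recent image-plane positions with a constant-velocity model (an assumption).

import numpy as np

def predict_pass_target(track, horizon_frames=5):
    """track: list of recent (x, y) ball centers, oldest first."""
    pts = np.asarray(track, dtype=float)
    if len(pts) < 2:
        return None                           # not enough history to predict
    velocity = pts[-1] - pts[-2]              # last frame-to-frame displacement
    return pts[-1] + horizon_frames * velocity

recent_positions = [(100, 220), (112, 216), (125, 211)]
print(predict_pass_target(recent_positions))  # predicted (x, y) a few frames ahead
```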
Step S25: perform a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
In some embodiments, the ball or the person in contact with the ball is kept at the center of the picture in the video recorded by the camera, so that the user can later view the ball or the person in contact with it more clearly.
In this embodiment of the present application, the camera control strategy corresponding to the ball game scene includes tracking the ball and the person in contact with the ball, and the wearable device adjusts the shooting angle of the camera according to this strategy. This ensures that, both while the shooting angle is being adjusted and afterwards, the camera can be aimed at the ball or at the person in contact with it in time, which improves the quality of the recorded ball game video. Moreover, no manual intervention is required, which greatly improves the convenience of recording ball game videos.
In some embodiments, the video recording method further includes:
and A1, predicting the action of the contacter in the process of recording the video.
In this embodiment, while tracking the ball, if it is detected that the ball is in a player's hands, the skeletal key points of that player in several adjacent image frames are obtained through a preset human skeletal key point detection algorithm, and the action the player is about to perform is predicted from the obtained key points.
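A hedged sketch of such a prediction from key points is given below; the key-point names and thresholds are illustrative assumptions standing in for the preset detection algorithm mentioned above.

```python
# Illustrative sketch: infer the upcoming action from skeletal key points in
# adjacent frames. Key-point names and thresholds are assumptions.

def predict_player_action(frames):
    """frames: per-frame dicts such as
    {"right_wrist": (x, y), "right_shoulder": (x, y), "right_hip": (x, y)}."""
    first, last = frames[0], frames[-1]
    # Image y grows downward, so a rising wrist means its y value decreases.
    wrist_rise = first["right_wrist"][1] - last["right_wrist"][1]
    wrist_above_shoulder = last["right_wrist"][1] < last["right_shoulder"][1]
    hip_shift = abs(last["right_hip"][0] - first["right_hip"][0])
    if wrist_above_shoulder and wrist_rise > 40 and hip_shift < 20:
        return "shot"     # arm extended upward while mostly stationary
    if wrist_above_shoulder and hip_shift >= 20:
        return "layup"    # raising the ball while moving toward the basket
    return "other"
```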
A2: if the predicted action of the person in contact with the ball is a target action, adjust the shooting parameters of the camera.
In this embodiment, the shooting parameters include the magnification. After the magnification of the camera is increased, the person in contact with the ball occupies a larger proportion of the picture, which makes it easier for the user to view the details of that person's action; conversely, reducing the magnification shrinks that proportion but widens the shooting range, which makes it easier for the user to view the actions of the players over a larger area.
In some embodiments, if the predicted action of the person in contact with the ball is not a target action, the shooting angle of the camera may be adjusted according to the camera control strategy without adjusting the shooting parameters. For example, if the predicted action is a pass, the direction of the ball after the pass is predicted, and the shooting angle of the camera is adjusted according to the prediction so that the lens is aimed at the predicted direction of the passed ball.
Correspondingly, the step S25 includes:
and executing a recording action according to the camera with the adjusted shooting angle and the adjusted shooting parameters to obtain a recorded video.
In this embodiment, for example, if the shooting parameters include the magnification, the video is recorded at the adjusted magnification, yielding a video with the corresponding magnification.
In the above A1 to A2, because the magnification of the camera is adjusted according to the predicted action of the person in contact with the ball, the recorded video contains segments at different magnifications; that is, the user can view video at different magnifications, which improves the user experience.
In some embodiments, the ball game scene is a basketball game scene, and A2 includes:
and a21, if the target motion is a pitching motion, reducing the magnification of the camera.
For example, the camera with the reduced magnification can shoot a basketball and a contact person at the same time.
And a22, if the target motion is taken as the basket-up motion, magnifying the magnification of the camera.
The above-mentioned upper basket is also called a take-out basket, that is, an action of directly delivering a ball to a basket by a player, and the shooting is an action of not directly delivering a ball to a basket by a player.
In an actual viewing scene, when a player shoots, the spectator usually needs to shift their gaze to track the basketball over a wider field of view and see in time whether the shot enters the basket; when a player goes up for a layup, the spectator usually wants to see the player's fine movements. Therefore, A21 and A22 reduce or increase the magnification of the camera depending on whether the player shoots or lays the ball up; that is, the proportion and position of the player in the picture are adjusted accordingly. This ensures that the resulting video better meets the user's needs, captures close-up details of the player's movements, and gives the picture more impact.
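The zoom rule of A21 and A22 can be summarized in a small lookup, as sketched below; the concrete magnification values are assumptions.

```python
# Minimal sketch of the A21/A22 zoom rule: zoom out for a shot so both ball and
# shooter stay in frame, zoom in for a layup to capture action detail.
# The magnification values are assumptions.

ZOOM_RULES = {
    "shot":  0.8,   # reduce magnification -> wider field of view
    "layup": 1.6,   # increase magnification -> close-up on the player
}

def choose_magnification(predicted_action: str, current_magnification: float) -> float:
    """Return the magnification to use for the predicted action."""
    return ZOOM_RULES.get(predicted_action, current_magnification)

print(choose_magnification("shot", 1.0))   # 0.8
print(choose_magnification("layup", 1.0))  # 1.6
```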
In some embodiments, the step S21 (step S11) includes:
and if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is a specified mode, controlling a camera of the wearable device to execute a shooting action.
In this embodiment, the wearable device has at least two recording modes: a designated mode and a non-designated mode. When the user wants to record a video, the user first selects a recording mode, and the wearable device performs the corresponding actions according to the selected mode. For example, when the recording mode selected by the user is the designated mode, the camera is controlled to perform a shooting action so that the scene can be analyzed from the captured images.
In some embodiments, the controlling the camera of the wearable device to perform a shooting action includes:
and controlling the camera of the wearable device to be turned up, and respectively executing shooting actions at least 2 different shooting angles.
In this embodiment, the camera can be flipped up by the motor, so that it has a better viewing angle for shooting the subject.
Of course, if the wearable device is a children's phone watch and the camera is integrated into the dial, the motor can flip the camera up by flipping the dial up. Note that the children's phone watch includes a dial, a support plate, and a strap; the dial sits on the support plate, and the two ends of the support plate are connected to the strap. When the camera is not in use, both ends of the dial are in contact with the two ends of the support plate and the screen of the dial is parallel to the plane of the support plate; after the dial is flipped up, only one end of the dial remains in contact with one end of the support plate, and the screen of the dial is no longer parallel to the support plate.
In this embodiment, because shooting actions are performed at at least two different shooting angles, at least two images are obtained. Compared with analyzing only one image, analyzing several images subsequently yields a more accurate scene recognition result.
In some embodiments, performing shooting actions at at least two different shooting angles includes:
controlling the camera to rotate from the left side of the basketball court to the right side of the basketball court and capturing a plurality of images.
Correspondingly, step S22 includes:
the method comprises the steps of combining a plurality of shot images into a complete image, then confirming whether a basket exists in a scene or not through image semantic segmentation and an image classification algorithm of deep learning, confirming whether the number of people in a field is close to the number of people in a basketball game or not, and if the basket exists and the number of people in the field is close to the number of people in the basketball game, judging that the recorded scene is the basketball game scene. In order to further improve the accuracy of the obtained judgment result of the recorded scene, whether actions such as running, dribbling, passing, shooting and the like exist in the field personnel is identified through a human skeleton key point technology, and if the actions exist, the current recorded scene is considered as a basketball game scene.
In some embodiments, the video recording method further comprises:
and if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is a non-specified mode, directly recording the video.
In this embodiment, when the wearable device determines that the current recording mode is the non-specific mode, the video recording is directly performed without performing a shooting operation or recognizing a scene, so that the video recording can be quickly performed.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Embodiment two:
Fig. 3 shows a block diagram of a video recording apparatus according to an embodiment of the present application, corresponding to the video recording method described in the foregoing embodiments; for convenience of description, only the portions related to this embodiment are shown.
Referring to Fig. 3, the video recording apparatus 3 is applied to a wearable device and includes: a video recording command receiving module 31, a scene recognition module 32, a camera control module 33, and a video recording module 34, wherein:
and a video recording command receiving module 31, configured to control a camera of the wearable device to perform a shooting action if a video recording command is received.
In some embodiments, the user may issue a video recording command by directly pressing a video-recording key on the wearable device. If the wearable device is equipped with a microphone, the video recording command can also be issued by voice.
In some embodiments, if the wearable device establishes a connection (e.g., a bluetooth connection) with another terminal device (e.g., a mobile phone), the user may send a video recording command to the wearable device through the mobile phone.
In some embodiments, if the lens of the camera of the wearable device faces upward (towards the sky) before the wearable device receives the video recording command, the camera is flipped up first and then the shooting action is performed.
The scene recognition module 32 is configured to determine the recording scene from the image captured by the camera.
The camera control module 33 is configured to adjust the shooting angle of the camera according to the recording scene.
The video recording module 34 is configured to perform a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
In this embodiment, both during and after the adjustment of the shooting angle, the camera continues to perform the recording action, and recording does not stop until a recording-end command is received, so that the complete recorded video is obtained.
In this embodiment of the present application, after receiving a video recording command, the wearable device controls the camera to perform a shooting action, performs scene recognition on the image captured by the camera, adjusts the shooting angle of the camera according to the recognized recording scene, and performs the recording action with the angle-adjusted camera to obtain a recorded video. Because the wearable device adjusts the shooting angle of the camera according to the recording scene it has recognized, the angle-adjusted camera is better suited to the recording scene in which the wearable device is currently located, which ensures that the recorded video is more accurate. In addition, because the video is recorded by adjusting the shooting angle of the camera, the user does not need to move the limb wearing the wearable device back and forth; this prevents the picture of the recorded video from shaking, improves the stability of the recorded video, and therefore improves its quality.
In some embodiments, the recording scene is a ball game scene, and the camera control module 33 includes:
a camera control strategy selection unit, configured to select a corresponding camera control strategy according to the ball game scene, where the camera control strategy includes tracking the ball and the person in contact with the ball; and
a shooting angle adjustment unit, configured to adjust the shooting angle of the camera according to the camera control strategy.
In this embodiment, while tracking and shooting the ball and the person in contact with the ball, if it is recognized that the person in contact with the ball is running with the ball, the lens of the camera is controlled to stay aimed at that person or at the ball. If it is recognized that the person in contact with the ball is about to pass it, the motion trail of the ball is determined, the direction and distance of the ball's movement are predicted from the motion trail, and the shooting angle of the camera is adjusted according to the prediction (direction and distance) so that the lens of the camera is aimed at the predicted direction of the pass, allowing the ball to be locked onto quickly after it is passed.
In some embodiments, the video recording apparatus 3 further includes:
an action prediction unit, configured to predict the action of the person in contact with the ball while the video is being recorded.
In this embodiment, while tracking the ball, if it is detected that the ball is in a player's hands, the skeletal key points of that player in several adjacent image frames are obtained through a preset human skeletal key point detection algorithm, and the action the player is about to perform is predicted from the obtained key points.
The apparatus further includes a shooting parameter adjustment unit, configured to adjust the shooting parameters of the camera if the predicted action of the person in contact with the ball is a target action.
Correspondingly, the video recording module 34 is specifically configured to:
and executing a recording action according to the camera with the adjusted shooting angle and the adjusted shooting parameters to obtain a recorded video.
In some embodiments, the ball game scene is a basketball game scene, and the shooting parameter adjustment unit is specifically configured to:
reduce the magnification of the camera if the target action is a shot action, and increase the magnification of the camera if the target action is a layup action.
In some embodiments, the video recording command receiving module 31 is specifically configured to:
and if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is a specified mode, controlling a camera of the wearable device to execute a shooting action.
In some embodiments, the video recording apparatus 3 further comprises:
and the direct recording module is used for directly recording the video if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is a non-specified mode.
In some embodiments, when controlling the camera of the wearable device to perform a shooting action, the video recording command receiving module 31 is specifically configured to:
control the camera of the wearable device to flip up, and perform shooting actions at at least two different shooting angles.
In this embodiment, the camera can be flipped up by the motor, so that it has a better viewing angle for shooting the subject.
Of course, if the wearable device is a children's phone watch and the camera is integrated into the dial, the motor can flip the camera up by flipping the dial up. Note that the children's phone watch includes a dial, a support plate, and a strap; the dial sits on the support plate, and the two ends of the support plate are connected to the strap. When the camera is not in use, both ends of the dial are in contact with the two ends of the support plate and the screen of the dial is parallel to the plane of the support plate; after the dial is flipped up, only one end of the dial remains in contact with one end of the support plate, and the screen of the dial is no longer parallel to the support plate.
In this embodiment, because shooting actions are performed at at least two different shooting angles, at least two images are obtained. Compared with analyzing only one image, analyzing several images subsequently yields a more accurate scene recognition result.
In some embodiments, performing shooting actions at at least two different shooting angles includes:
controlling the camera to rotate from the left side of the basketball court to the right side of the basketball court and capturing a plurality of images.
Correspondingly, the scene recognition module 32 is specifically configured to:
the method comprises the steps of combining a plurality of shot images into a complete image, then confirming whether a basket exists in a scene or not through image semantic segmentation and an image classification algorithm of deep learning, confirming whether the number of people in a field is close to the number of people in a basketball game or not, and if the basket exists and the number of people in the field is close to the number of people in the basketball game, judging that the scene where the scenery to be recorded is located is the basketball game scene. In order to further improve the accuracy of the obtained scene judgment result, whether actions such as running, dribbling, passing, shooting and the like exist in the scene personnel is identified through a human skeleton key point technology, and if the actions exist, the currently recorded scene is considered as a basketball game scene.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Embodiment three:
Fig. 4 is a schematic structural diagram of a wearable device according to an embodiment of the present application. As shown in Fig. 4, the wearable device 4 of this embodiment includes: at least one processor 40 (only one processor is shown in Fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40. When executing the computer program 42, the processor 40 implements the steps of any of the method embodiments:
if a video recording command is received, controlling a camera of the wearable device to perform a shooting action;
determining a recording scene from an image captured by the camera;
adjusting a shooting angle of the camera according to the recording scene;
and performing a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
Optionally, the recording scene is a ball game scene, and adjusting the shooting angle of the camera according to the recording scene includes:
selecting a corresponding camera control strategy according to the ball game scene, where the camera control strategy includes tracking the ball and the person in contact with the ball;
and adjusting the shooting angle of the camera according to the camera control strategy.
Optionally, the method further includes:
while the video is being recorded, predicting the action of the person in contact with the ball;
and if the predicted action of the person in contact with the ball is a target action, adjusting the shooting parameters of the camera;
where the performing a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video, includes:
performing a recording action with the camera whose shooting angle has been adjusted and whose shooting parameters have been adjusted, to obtain a recorded video.
Optionally, the adjusting the shooting parameters of the camera if the predicted action of the person in contact with the ball is a target action includes:
if the target action is a shot action, reducing the magnification of the camera;
and if the target action is a layup action, increasing the magnification of the camera.
Optionally, the controlling, if a video recording command is received, a camera of the wearable device to perform a shooting action includes:
if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is the designated mode, controlling the camera of the wearable device to perform a shooting action.
Optionally, the method further includes:
if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is the non-designated mode, recording the video directly.
Optionally, the controlling the camera of the wearable device to perform a shooting action includes:
controlling the camera of the wearable device to flip up, and performing shooting actions at at least two different shooting angles.
The wearable device 4 may be a children's phone watch or the like. The wearable device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that Fig. 4 is merely an example of the wearable device 4 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components, such as input/output devices and network access devices.
The Processor 40 may be a Central Processing Unit (CPU), and the Processor 40 may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may in some embodiments be an internal storage unit of the wearable device 4, such as a hard disk or a memory of the wearable device 4. In other embodiments, memory 41 may also be an external storage device of wearable device 4, such as a plug-in hard disk provided on wearable device 4, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like. Further, the memory 41 may also include both an internal storage unit and an external storage device of the wearable device 4. The memory 41 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiment of the present application provides a computer program product, which when running on a wearable device, enables the wearable device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal apparatus, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A video recording method, characterized in that the method is applied to a wearable device and comprises:
if a video recording command is received, controlling a camera of the wearable device to perform a shooting action;
determining a recording scene from an image captured by the camera;
adjusting a shooting angle of the camera according to the recording scene;
and performing a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
2. The video recording method of claim 1, wherein the recording scene is a ball game scene, and the adjusting the shooting angle of the camera according to the recording scene comprises:
selecting a corresponding camera control strategy according to the ball game scene, wherein the camera control strategy comprises tracking the ball and the person in contact with the ball;
and adjusting the shooting angle of the camera according to the camera control strategy.
3. The video recording method of claim 2, further comprising:
while the video is being recorded, predicting the action of the person in contact with the ball;
and if the predicted action of the person in contact with the ball is a target action, adjusting shooting parameters of the camera;
wherein the performing a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video, comprises:
performing a recording action with the camera whose shooting angle has been adjusted and whose shooting parameters have been adjusted, to obtain a recorded video.
4. The video recording method of claim 3, wherein the ball game scene is a basketball game scene, and the adjusting the shooting parameters of the camera if the predicted action of the person in contact with the ball is a target action comprises:
if the target action is a shot action, reducing the magnification of the camera;
and if the target action is a layup action, increasing the magnification of the camera.
5. The video recording method according to any one of claims 1 to 4, wherein the controlling, if a video recording command is received, a camera of the wearable device to perform a shooting action comprises:
if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is a designated mode, controlling the camera of the wearable device to perform a shooting action.
6. The video recording method of claim 5, further comprising:
if a video recording command is received and the video recording command indicates that the current recording mode of the wearable device is a non-designated mode, recording the video directly.
7. The video recording method of claim 5, wherein the controlling the camera of the wearable device to perform a shooting action comprises:
controlling the camera of the wearable device to flip up, and performing shooting actions at at least two different shooting angles.
8. A video recording apparatus, characterized in that the apparatus is applied to a wearable device and comprises:
a video recording command receiving module, configured to control a camera of the wearable device to perform a shooting action if a video recording command is received;
a scene recognition module, configured to determine a recording scene from an image captured by the camera;
a camera control module, configured to adjust a shooting angle of the camera according to the recording scene;
and a video recording module, configured to perform a recording action with the camera whose shooting angle has been adjusted, to obtain a recorded video.
9. A wearable device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202111108701.3A 2021-09-22 2021-09-22 Video recording method and device and wearable device Pending CN113810614A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111108701.3A CN113810614A (en) 2021-09-22 2021-09-22 Video recording method and device and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111108701.3A CN113810614A (en) 2021-09-22 2021-09-22 Video recording method and device and wearable device

Publications (1)

Publication Number Publication Date
CN113810614A true CN113810614A (en) 2021-12-17

Family

ID=78939959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111108701.3A Pending CN113810614A (en) 2021-09-22 2021-09-22 Video recording method and device and wearable device

Country Status (1)

Country Link
CN (1) CN113810614A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104917959A (en) * 2015-05-19 2015-09-16 广东欧珀移动通信有限公司 Photographing method and terminal
US20150350606A1 (en) * 2014-05-29 2015-12-03 Abdullah I. Khanfor Automatic object tracking camera
JP2015535971A (en) * 2012-09-12 2015-12-17 サムスン エレクトロニクス カンパニー リミテッド Display device and control method thereof
CN105208282A (en) * 2015-10-10 2015-12-30 上海慧体网络科技有限公司 Method for controlling automatic following shot of camera according to basketball positions on game site
CN108965851A (en) * 2018-04-17 2018-12-07 Oppo广东移动通信有限公司 AR photographic device, AR earphone and AR photographic device application method
CN109739464A (en) * 2018-12-20 2019-05-10 Oppo广东移动通信有限公司 Setting method, device, terminal and the storage medium of audio
CN110602400A (en) * 2019-09-17 2019-12-20 Oppo(重庆)智能科技有限公司 Video shooting method and device and computer readable storage medium
WO2020057353A1 (en) * 2018-09-21 2020-03-26 深圳市九洲电器有限公司 Object tracking method based on high-speed ball, monitoring server, and video monitoring system
CN111756992A (en) * 2019-09-23 2020-10-09 广东小天才科技有限公司 Wearable device follow-up shooting method and wearable device
WO2021139749A1 (en) * 2020-01-09 2021-07-15 上海擎感智能科技有限公司 Method for image acquisition, electronic device, computer storage medium, and vehicle

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015535971A (en) * 2012-09-12 2015-12-17 サムスン エレクトロニクス カンパニー リミテッド Display device and control method thereof
US20150350606A1 (en) * 2014-05-29 2015-12-03 Abdullah I. Khanfor Automatic object tracking camera
CN104917959A (en) * 2015-05-19 2015-09-16 广东欧珀移动通信有限公司 Photographing method and terminal
CN105208282A (en) * 2015-10-10 2015-12-30 上海慧体网络科技有限公司 Method for controlling automatic following shot of camera according to basketball positions on game site
CN108965851A (en) * 2018-04-17 2018-12-07 Oppo广东移动通信有限公司 AR photographic device, AR earphone and AR photographic device application method
WO2020057353A1 (en) * 2018-09-21 2020-03-26 深圳市九洲电器有限公司 Object tracking method based on high-speed ball, monitoring server, and video monitoring system
CN109739464A (en) * 2018-12-20 2019-05-10 Oppo广东移动通信有限公司 Setting method, device, terminal and the storage medium of audio
CN110602400A (en) * 2019-09-17 2019-12-20 Oppo(重庆)智能科技有限公司 Video shooting method and device and computer readable storage medium
CN111756992A (en) * 2019-09-23 2020-10-09 广东小天才科技有限公司 Wearable device follow-up shooting method and wearable device
WO2021139749A1 (en) * 2020-01-09 2021-07-15 上海擎感智能科技有限公司 Method for image acquisition, electronic device, computer storage medium, and vehicle

Similar Documents

Publication Publication Date Title
US11860511B2 (en) Image pickup device and method of tracking subject thereof
JP7371227B2 (en) Intelligent video recording method and device
US10965875B2 (en) Query response by a gimbal mounted camera
CN100556079C (en) Camera-control equipment, camera chain, electronic meeting system and video camera control method
US8199208B2 (en) Operation input apparatus, operation input method, and computer readable medium for determining a priority between detected images
US7995794B2 (en) Remote control of an image capturing unit in a portable electronic device
JP4360399B2 (en) Imaging device
US7760995B2 (en) Window display system and window display method
US20100194849A1 (en) Method and a device for controlling the movement of a line of sight, a videoconferencing system, a terminal and a program for implementing said method
CN101124820A (en) Face image correction
US20130202158A1 (en) Image processing device, image processing method, program and recording medium
CN102111541A (en) Image pickup control apparatus, image pickup control method and program
CN101931747A (en) Image processing apparatus and electronic equipment
CN110290299B (en) Imaging method, imaging device, storage medium and electronic equipment
CN109936697B (en) Video shooting target tracking method and device
CN110771175A (en) Video playing speed control method and device and motion camera
CN103227892A (en) Electronic apparatus and photography control method
CN113891145B (en) Super-high definition video preprocessing main visual angle roaming playing system and mobile terminal
JP7267686B2 (en) Imaging device and its control method
CN113810614A (en) Video recording method and device and wearable device
CN114339357A (en) Image acquisition method, image acquisition device and storage medium
CN114500981B (en) Venue target tracking method, device, equipment and medium
CN106454112A (en) Photographing method and system
JP6412222B2 (en) Shooting device, linked shooting method, and linked shooting program
US20220232159A1 (en) Image capturing apparatus, method for controlling the same, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination