WO2021129382A1 - Video processing method and device (Procédé et dispositif de traitement vidéo) - Google Patents

Video processing method and device

Info

Publication number
WO2021129382A1
WO2021129382A1 (PCT/CN2020/134645)
Authority
WO
WIPO (PCT)
Prior art keywords
video
identification information
person
camera
trigger signal
Prior art date
Application number
PCT/CN2020/134645
Other languages
English (en)
Chinese (zh)
Inventor
聂兰龙
Original Assignee
青岛千眼飞凤信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛千眼飞凤信息技术有限公司 filed Critical 青岛千眼飞凤信息技术有限公司
Publication of WO2021129382A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/76 - Television signal recording

Definitions

  • This application relates to the technical field of video processing, and in particular to a video processing method and device.
  • at present, video is mainly obtained independently by users with cameras, mobile phones, camcorders, and drones; methods and means for providing high-quality video services are lacking.
  • High-quality video requires not only video capture equipment with good imaging effects, but also auxiliary shooting means.
  • the video captured using a traditional fixed camera has a rigid background and lacks dynamic effects, which makes it impossible to achieve an ideal presentation.
  • photographers usually need to set up dolly tracks and camera jibs to achieve the ideal effect.
  • the embodiments of the present application provide a video processing method and device to at least solve the technical problem of low reliability of video processing methods in the related art.
  • a video processing method including: receiving a trigger signal, wherein the trigger signal is used to trigger a camera device to shoot, and the camera device includes a driving device and a camera.
  • the driving device is used for driving the camera in the camera device to move; the camera is used for video shooting during the movement; acquiring the video captured by the camera under the trigger of the trigger signal; extracting from the video the identification information corresponding to the person in the video; and storing the corresponding relationship between the identification information and the video.
  • before receiving the trigger signal, the video processing method further includes: sensing, through a sensor provided on the camera device, that a person is present in the shooting range of the camera device; and, in response to the sensing, sending out the trigger signal.
  • the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • before receiving the trigger signal, the video processing method further includes: receiving a user's operation of a switch in the camera device; and, in response to the operation, sending the trigger signal.
  • the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • before receiving the trigger signal, the video processing method further includes: receiving a user's operation on a software interface; and, in response to the operation, sending the trigger signal to the camera device via a network.
  • before receiving the user's operation on the software interface, the video processing method further includes: acquiring the identification information of the camera device by scanning a graphical code set on the camera device; and displaying, on the software interface according to the identification information, the operations that can be performed on the camera device.
  • the method further includes: acquiring the identification information of the video to be extracted, and searching the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and displaying the one or more videos to the user corresponding to the identification information of the video to be extracted.
  • before receiving the user's operation on the software interface, the video processing method further includes: acquiring the geographic location information of the handheld device when the software interface is displayed on the person's handheld device; and displaying on the software interface, according to the geographic location information, the camera devices within a predetermined range of the handheld device that can be controlled by the person, and the operations that can be performed on those camera devices.
  • extracting the identification information corresponding to the person in the video includes: extracting the sensed radio frequency identification information from the radio frequency signal and using the radio frequency identification information as the identification information for identifying the person, or extracting the identification information that identifies the person from the network trigger signal; acquiring the identification information of the video to be extracted includes: acquiring the radio frequency signal, extracting the sensed radio frequency identification information, and determining the radio frequency identification information as the identification information of the person, or extracting the identification information that identifies the person from the network trigger signal.
  • extracting the identification information corresponding to the person in the video includes: recognizing attachments on the person and/or biological characteristics of the person, and using the feature information of the attachments and/or the feature information of the biological characteristics as the identification information for identifying the person;
  • acquiring the identification information of the video to be extracted includes: acquiring the feature information of the person's attachments and/or the feature information of the person's biological characteristics, and determining the feature information of the attachments and/or the feature information corresponding to the biological characteristics as the identification information of the person;
  • the attachments include at least one of the following: clothing, accessories, and hand-held items; the biological characteristics include at least one of the following: facial characteristics and posture characteristics; an attachment is used to uniquely identify the person within a predetermined area.
  • when the video contains multiple persons, extracting the identification information corresponding to the persons in the video includes: recognizing the attachments and/or biological characteristics of each of the multiple persons; determining the feature information of the attachments on each person and/or the feature information corresponding to the biological characteristics as the identification information of each of the multiple persons; and storing the corresponding relationship between the identification information of each of the multiple persons and the video.
  • storing the corresponding relationship between the identification information of each of the plurality of persons and the video includes: determining the time node at which the identification information of each person is recognized in the video; using the time node as the time label of that person's identification information; and storing the corresponding relationship between each person's identification information, the time label added to it, and the video.
  • the movement trajectory of the camera includes at least one of the following: a reciprocating movement trajectory between a predetermined starting point and a predetermined end point, a cyclic movement trajectory along a predetermined path, a movement trajectory designed by a predetermined program, and a tracking movement trajectory that follows a target object.
  • the movement of the camera is at least one of the following: orbital movement and rotation movement.
  • the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • a video processing device including: a first receiving unit, configured to receive a trigger signal, wherein the trigger signal is used to trigger a camera device to shoot, the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera device to move, and the camera is used for video shooting during the movement; a first acquisition unit, configured to acquire the video shot by the camera under the trigger of the trigger signal; and an extraction unit, configured to extract from the video the identification information corresponding to the person in the video and save the corresponding relationship between the identification information and the video.
  • the video processing device further includes: a sensing unit, configured to sense, through a sensor provided on the camera device, that a person is present in the shooting range of the camera device before the trigger signal is received; and a response unit, configured to send out the trigger signal in response to the sensing of the sensor.
  • the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • the video processing device further includes: a second receiving unit, configured to receive a user's operation of a switch in the camera device before the trigger signal is received; and a second response unit, configured to send the trigger signal in response to the operation.
  • the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • the video processing device further includes: a third receiving unit, configured to receive a user's operation on the software interface before the trigger signal is received; and a third response unit, configured to send the trigger signal to the camera device via the network in response to the operation.
  • the device further includes: a third obtaining unit, configured to obtain the identification information of the camera device by scanning a graphical code set on the camera device before receiving the user's operation on the software interface;
  • the display unit is configured to display the operations that can be performed on the camera device on the software interface according to the identification information.
  • the video processing apparatus further includes: a fourth acquisition unit, configured to acquire, before receiving the user's operation on the software interface, the geographic location information of the handheld device when the software interface is displayed on the person's handheld device; and a display unit, configured to display on the software interface, according to the geographic location information, the camera devices within a predetermined range of the handheld device that can be controlled by the person, and the operations that can be performed on those camera devices.
  • the device further includes: a second acquisition unit, configured to, after the identification information corresponding to the person in the video has been extracted and the corresponding relationship between the identification information and the video has been saved, acquire the identification information of the video to be extracted and search the saved videos for one or more videos corresponding to that identification information; and a display unit, configured to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
  • the extraction unit is configured to recognize the attachments on the person and/or the biological characteristics of the person, and to use the feature information of the attachments and/or the feature information of the biological characteristics as the identification information for identifying the person;
  • the second acquisition unit is configured to acquire the feature information of the person's attachments and/or the feature information of the person's biological characteristics, and to determine the feature information of the attachments and/or the feature information corresponding to the biological characteristics as the identification information of the person;
  • the attachments include at least one of the following: clothing, accessories, and hand-held items; the biological characteristics include at least one of the following: facial characteristics and posture characteristics; an attachment is used to uniquely identify the person within a predetermined area.
  • the extraction unit is further configured to extract the sensed radio frequency identification information from the radio frequency signal and use the radio frequency identification information as the identification information for identifying the person, or to extract the identification information that identifies the person from the network trigger signal; the second acquisition unit is further configured to acquire the radio frequency identification information sensed from the radio frequency signal and determine it as the identification information of the person, or to extract the identification information that identifies the person from the network trigger signal.
  • the extraction unit includes: a first identification module, configured to identify the attachments and/or biological characteristics of each of the multiple persons; and a first storage module, configured to determine the feature information of the attachments on each person and/or the feature information corresponding to the biological characteristics as the identification information of each of the multiple persons, and to store the corresponding relationship between the identification information of each person and the video.
  • the first storage module includes: a first determining submodule, configured to determine the time node at which the identification information of each of the plurality of persons is recognized in the video; a second determining submodule, configured to use the time node as the time label of each person's identification information; and a saving submodule, configured to store the corresponding relationship between each person's identification information, the time label added to it, and the video.
  • the movement trajectory of the camera includes at least one of the following: a reciprocating movement trajectory between a predetermined starting point and a predetermined end point, a cyclic movement trajectory along a predetermined path, a movement trajectory designed by a predetermined program, and a tracking movement trajectory that follows a target object.
  • the movement of the camera is at least one of the following: orbital movement and rotation movement.
  • the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • a storage medium is also provided, which includes a stored program, wherein, when the program runs, the video processing method described in any one of the foregoing is executed.
  • a processor is also provided, and the processor is configured to run a program, wherein the video processing method described in any one of the above is executed when the program runs.
  • the trigger signal is received, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera; the driving device is used to drive the camera in the camera device to move, and the camera is used for video shooting during the movement.
  • the processing method achieves the purpose of correspondingly saving, in advance, the video containing the user and the identification information of the person recognized from the video, and of querying the user's video from the pre-stored video library according to the user's identification information. This achieves the technical effect of improving the reliability of the video processing method, improves the user experience, and solves the technical problem of low reliability of video processing methods in the related art.
  • Fig. 1 is a flowchart of a video processing method according to the first embodiment of the present application.
  • Fig. 2 is a schematic diagram of an application scenario of a photographing device according to the first embodiment of the present application.
  • Fig. 3 is a schematic diagram of a photographing device according to the first embodiment of the present application.
  • Fig. 4 is a first schematic diagram of an application scenario of a photographing device according to the second embodiment of the present application.
  • Fig. 5 is a second schematic diagram of an application scenario of a photographing device according to the second embodiment of the present application.
  • Fig. 6 is a schematic diagram of a photographing device according to the second embodiment of the present application.
  • Fig. 7 is a first schematic diagram of an application scenario of a photographing device according to the third embodiment of the present application.
  • Fig. 8 is a second schematic diagram of an application scenario of a photographing device according to the third embodiment of the present application.
  • Fig. 9 is a first schematic diagram of an application scenario of a photographing device according to the fourth embodiment of the present application.
  • Fig. 10 is a second schematic diagram of an application scenario of a photographing device according to the fourth embodiment of the present application.
  • Fig. 11 is a schematic diagram of a photographing device according to the fourth embodiment of the present application.
  • Fig. 12 is a schematic diagram of an application scenario of a photographing device according to the fifth embodiment of the present application.
  • Fig. 13 is a schematic diagram of an application scenario of a photographing device according to the sixth embodiment of the present application.
  • Fig. 14 is a schematic diagram of a photographing device according to the sixth embodiment of the present application.
  • Fig. 15 is a schematic diagram of an application scenario of a photographing device according to the seventh embodiment of the present application.
  • Fig. 16 is a schematic diagram of a photographing device according to the seventh embodiment of the present application.
  • Fig. 17 is a schematic diagram of an application scenario of a photographing device according to the eighth embodiment of the present application.
  • Fig. 18 is a schematic diagram of a photographing device according to the eighth embodiment of the present application.
  • Fig. 19 is a schematic diagram of an application scenario of a photographing device according to the ninth embodiment of the present application.
  • Fig. 20 is a schematic diagram of a photographing device according to the ninth embodiment of the present application.
  • Fig. 21 is a schematic diagram of an application scenario of a photographing device according to the tenth embodiment of the present application.
  • Fig. 22 is a schematic diagram of a photographing device according to the tenth embodiment of the present application.
  • Fig. 23 is a schematic diagram of an application scenario of a photographing device according to the eleventh embodiment of the present application.
  • Fig. 24 is a schematic diagram of a photographing device according to the eleventh embodiment of the present application.
  • Fig. 25 is a schematic diagram of an application scenario of a photographing device according to the twelfth embodiment of the present application.
  • Fig. 26 is a schematic diagram of a photographing device according to the twelfth embodiment of the present application.
  • Fig. 27 is a schematic diagram of an application scenario of a photographing device according to the thirteenth embodiment of the present application.
  • Fig. 28 is a schematic diagram of a photographing device according to the thirteenth embodiment of the present application.
  • Fig. 29 is a schematic diagram of an application scenario of a photographing device according to the fourteenth embodiment of the present application.
  • Fig. 30 is a schematic diagram of a photographing device according to the fourteenth embodiment of the present application.
  • Fig. 31 is a schematic diagram of an application scenario of a photographing device according to the fifteenth embodiment of the present application.
  • Fig. 32 is a schematic diagram of a photographing device according to the fifteenth embodiment of the present application.
  • Fig. 33 is a schematic diagram of an application scenario of a photographing device according to the sixteenth embodiment of the present application.
  • Fig. 34 is a schematic diagram of a photographing device according to the sixteenth embodiment of the present application.
  • Fig. 35 is a first schematic diagram of an application scenario of a photographing device according to the seventeenth embodiment of the present application.
  • Fig. 36 is a second schematic diagram of an application scenario of a photographing device according to the seventeenth embodiment of the present application.
  • Fig. 37 is a schematic diagram of a photographing device according to the seventeenth embodiment of the present application.
  • Fig. 38 is a schematic diagram of a video processing device according to the eighteenth embodiment of the present application.
  • a method embodiment of a video processing method is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system, such as one executing a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one given here.
  • Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application. As shown in Fig. 1, the video processing method includes the following steps:
  • Step S102: Receive a trigger signal, where the trigger signal is used to trigger the camera device to shoot; the camera device includes a driving device and a camera; the driving device is used to drive the camera in the camera device to move, and the camera is used to perform video shooting during the movement.
  • the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • mechanical driving can be realized by rollers, ropes, conveyor belts, screw rods, etc.
  • electromagnetic driving can be realized by linear motors, magnetic levitation, etc.
  • pressure driving can be realized by fluid pressure, such as hydraulic or pneumatic pressure, for example, water power, wind power, a hydraulic pump, an air pump, etc.
  • the driving device drives the camera in the camera device to move; this can be realized by installing a certain movement track, where the movement trajectory of the camera can include at least one of the following: a reciprocating movement trajectory between a predetermined starting point and a predetermined end point, a cyclic movement trajectory along a predetermined path, a movement trajectory designed by a predetermined program, and a tracking movement trajectory that follows a target object.
  • the movement trajectory can be a reciprocating movement between a predetermined starting point and end point, a cyclic movement along a predetermined path, a movement that executes a predetermined program, a tracking movement that follows a target person, or a combination of the above movement modes. In a mixed execution of movement modes, for example, a predetermined program is executed after the target person has been tracked for a fixed period of time, and the target person is tracked again for a fixed period of time after the execution of the predetermined program is completed.
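  • the alternation between tracking and a pre-programmed path described above can be sketched as a small controller. This is an illustrative sketch only, counting control ticks rather than wall-clock time; the class and method names are assumptions, not part of the application.

```python
class MotionController:
    """Illustrative sketch of the mixed movement mode: track the target
    for a fixed number of ticks, then run a preset path, then track again."""

    def __init__(self, track_steps, program_path):
        self.track_steps = track_steps    # ticks to track before switching
        self.program_path = program_path  # list of preset camera positions
        self.mode = "track"
        self._count = 0

    def step(self, target_position):
        """Return the next camera position for one control tick."""
        if self.mode == "track":
            self._count += 1
            if self._count >= self.track_steps:
                self.mode = "program"     # switch to the preset path
                self._count = 0
            return target_position        # follow the target this tick
        pos = self.program_path[self._count]
        self._count += 1
        if self._count >= len(self.program_path):
            self.mode = "track"           # switch back to tracking
            self._count = 0
        return pos                        # run the preset path
```

With `track_steps=2` and a two-point path, the controller emits two tracked positions, then the two preset positions, then resumes tracking.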
  • the movement mode of the camera is at least one of the following: orbital movement and rotational movement.
  • in orbital movement, the track can be a sliding track, a telescopic track, or a rope track; in rotary movement, the camera can be rotated by a single joint or by multiple joints, and the rotating mechanism can be a rocker arm or a wheel; in addition, the movement mode can also be a mixture of orbital movement and rotary movement.
  • the video processing method may further include: receiving a user's operation on the software interface; in response to the operation, sending a trigger signal to the camera device via the network.
  • the video processing method may further include: acquiring the identification information of the camera device by scanning a graphical code set on the camera device, and displaying on the software interface, according to the identification information, the operations that can be performed on the camera device.
  • the video processing method may further include: acquiring the geographic location information of the handheld device when the software interface is displayed on the person's handheld device; and displaying on the software interface, based on the geographic location information, the camera devices within a predetermined range of the handheld device that can be controlled by the person, and the operations that can be performed on those camera devices.
  • the data information sent by the portable terminal APP and obtained via the network contains the user identification and confirms the use of the video capture device.
  • the data information confirming the use of the video capture device can be generated when the user enters the code of the video capture device in the APP and confirms its use, when the user scans the QR code of the video capture device with the APP, or when the portable terminal APP determines that the current location of the portable terminal and the location of the video capture device satisfy a predetermined condition.
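  • the three ways of generating the confirm-use data described above might be sketched as follows. This is a hedged illustration: the function name, the event fields, and the 50-meter proximity threshold are assumptions for demonstration, not values taken from the application.

```python
import math

def should_confirm_use(method, *, entered_code=None, device_code=None,
                       terminal_pos=None, device_pos=None, max_distance_m=50.0):
    """Decide whether a 'confirm use' message is generated for a device.

    method is one of "code" (user typed the device code), "qr" (the APP
    scanned the device's QR code, yielding its code), or "location"
    (terminal coordinates close enough to the device).
    """
    if method in ("code", "qr"):
        # both paths reduce to matching the device's code
        return entered_code == device_code
    if method == "location":
        dx = terminal_pos[0] - device_pos[0]
        dy = terminal_pos[1] - device_pos[1]
        return math.hypot(dx, dy) <= max_distance_m
    return False
```

For example, a terminal 50 m from the device (with the assumed threshold) would generate the confirmation, while one 500 m away would not.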
  • Step S104 Acquire a video captured by the camera under the trigger of the trigger signal.
  • before receiving the trigger signal, the video processing method may further include: sensing, through a sensor set on the camera device, that someone is present in the shooting range of the camera device; and, in response to the sensing, sending out the trigger signal.
  • the aforementioned sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • the sensor used to trigger the camera device may also be an infrared sensing unit that responds to infrared rays of the human body, a radio frequency sensing unit set to respond to radio frequency signals, or a radar detection unit that responds to moving objects.
  • the radio frequency sensing unit may respond to a radio frequency identification (RFID) card.
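  • the sensor-driven triggering described above can be sketched as a small dispatch function. This is a minimal sketch under assumptions: the event dictionary shape and field names are hypothetical, and it reflects the detail that an RFID reading supplies both the trigger and the person's identification information, whereas infrared and radar only supply the trigger.

```python
def handle_sensor_event(event):
    """Map a sensor reading to a trigger signal (and optional identity)."""
    if event["type"] == "infrared" and event.get("human_detected"):
        return {"trigger": True, "identity": None}
    if event["type"] == "rfid":
        # the sensed RFID tag doubles as the person's identification info
        return {"trigger": True, "identity": event["tag_id"]}
    if event["type"] == "radar" and event.get("moving_object"):
        return {"trigger": True, "identity": None}
    return {"trigger": False, "identity": None}
```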
  • the video processing method may further include: receiving a user's operation of a switch in the camera device; and in response to the operation, sending a trigger signal.
  • the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • the sensor used to trigger the camera device may also be: a photoelectric switch unit that responds to the passage of a human body, a key switch unit that responds to pressing by the human body, or a touch switch unit that responds to human touch.
  • the unit used to trigger the camera device to start may also be a feature trigger unit that responds to a person's gesture information, mouth-shape information, or body-shape information, or an instruction switch that responds to a network signal, for example, instruction information issued by a portable terminal APP through the network. The instruction information can be generated by selecting and operating a nearby video capture device in the APP, by entering the code of the video capture device in the APP or scanning the device's QR code with the APP, or by the APP detecting that the positioning coordinates of the mobile terminal fall within the area set for the video capture device.
• Step S106: Extract the identification information corresponding to the person in the video, and save the corresponding relationship between the identification information and the video.
• extracting the identification information corresponding to the person in the video may include: identifying attachments and/or biological characteristics of the person; and using the characteristic information of the attachments and/or the biological characteristics as the identification information used to identify the person.
• the acquired video and the identification information identified from the video are saved correspondingly, and the support system can perform corresponding extraction operations on the video according to this correspondence, so that visitors can easily obtain video clips containing themselves.
  • the acquired video and the identification information identified from the video are stored correspondingly.
• the video file may be mapped to the database and stored in correspondence with the identification information, or the summary information of the identification information may be included in the video file.
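The database mapping described above can be illustrated with a minimal sketch; the class and method names below are hypothetical, and a real implementation would use the system's actual database rather than in-memory dictionaries:

```python
class VideoIndex:
    """Minimal sketch: map saved video files to extracted identification information."""

    def __init__(self):
        # video path -> set of identification strings recognized in that video
        self._by_video = {}
        # identification string -> list of video paths containing that person
        self._by_identity = {}

    def save(self, video_path, identifications):
        """Record the correspondence between a video and the people identified in it."""
        self._by_video[video_path] = set(identifications)
        for ident in identifications:
            self._by_identity.setdefault(ident, []).append(video_path)

    def videos_for(self, ident):
        """Return every saved video whose identification information matches."""
        return list(self._by_identity.get(ident, []))


index = VideoIndex()
index.save("clip_001.mp4", ["face:alice", "rfid:0042"])
index.save("clip_002.mp4", ["face:bob"])
print(index.videos_for("face:alice"))  # ["clip_001.mp4"]
```

Saving the correspondence in both directions makes the later lookup step (finding every saved video that contains a given person) a single dictionary access.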
  • a trigger signal can be received, where the trigger signal is used to trigger the camera device to shoot.
• the camera device includes a driving device and a camera, and the driving device is used to drive the camera in the camera device to move for video shooting while moving; the video captured by the camera under the trigger signal is obtained; the identification information corresponding to the person in the video is extracted, and the corresponding relationship between the identification information and the video is saved, realizing the purpose of correspondingly saving in advance the video containing the user and the identification information of the person identified from the video.
• the video is identified to obtain the identification information of the person in the video, and the video and the identified identification information are stored correspondingly, so that when the user requests to obtain his or her own video, a video consistent with the user's identification information can be retrieved from the video library in which the videos are stored.
• in this way, the purpose of correspondingly saving in advance the video containing the user together with the identification information of the person identified from the video, and of obtaining the user's video from the pre-stored video library according to the user's identification information, is achieved, which improves the reliability of the video processing method and improves the user experience.
  • the video processing method in the present application solves the technical problem of low reliability of the video processing method in the related art.
• the video processing method further includes: obtaining the identification information of the video to be extracted, and searching the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and displaying the one or more videos corresponding to the identification information of the video to be extracted to the user.
• acquiring the identification information of the video to be extracted may include: acquiring the characteristic information of a person's attachment and/or the identification information of the person's biological characteristics, and determining the characteristic information of the attachment and/or the characteristic information corresponding to the biological characteristics as the identification information of the person; wherein the attachment includes at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture characteristics; and the attachment is used to uniquely identify a person in a predetermined area.
• extracting the identification information corresponding to the people in the video may include: identifying attachments and/or biological characteristics of each person from the multiple people; determining the characteristic information of the attachments on each person and/or the characteristic information corresponding to the biological characteristics as the identification information of each of the multiple people; and saving the corresponding relationship between the identification information of each of the multiple people and the video.
• storing the corresponding relationship between the identification information of each of the multiple people and the video includes: determining the time node at which the identification information of each of the multiple people is recognized in the video; using the time node as the time label of each person's identification information; and saving the corresponding relationship between each person's identification information, the time label added for it, and the video.
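The time-label scheme can be sketched as follows, assuming the recognizer emits (time, identification) events while scanning the video; all names are illustrative:

```python
def label_recognitions(recognitions):
    """Group recognition events into per-person time labels for one video.

    `recognitions` is a list of (time_seconds, identification) events emitted
    by the recognizer as it scans the video stream; the earliest time a
    person's identification appears is kept as that person's time label.
    """
    labels = {}
    for t, ident in recognitions:
        # keep the earliest time node at which each person was recognized
        if ident not in labels or t < labels[ident]:
            labels[ident] = t
    return labels


events = [(3.2, "face:alice"), (4.0, "face:bob"), (9.5, "face:alice")]
print(label_recognitions(events))  # {"face:alice": 3.2, "face:bob": 4.0}
```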
• extracting the identification information corresponding to the person in the video may include: extracting the sensed radio frequency identification information from the radio frequency signal and using it as the identification information that identifies the person, or extracting the identification information that identifies the person from the network trigger signal; likewise, acquiring the identification information of the video to be extracted may include: extracting the sensed radio frequency identification information from the radio frequency signal and determining it as the identification information of the person, or extracting the identification information that identifies the person from the network trigger signal.
  • extracting the identification information corresponding to the person in the video from the video can be implemented by an identification unit, which can be a radio frequency identification unit carried by a person, such as an RFID card.
• extracting the identification information corresponding to the person in the video can also be realized by an identification unit based on identification information obtained over the network, for example, data information issued by the user's portable terminal APP and obtained over the network, which contains the user's identification and a confirmation to use the video capture device.
• the data information for confirming the use of the video capture device can be generated by the user entering the code of the video capture device in the APP application and confirming its use, by the user scanning the QR code of the video capture device with the APP application, or by the portable terminal APP when the current location information of the portable terminal and the location information of the video capture device reach a predetermined condition.
  • Figure 2 is a schematic diagram of the application scenario of the camera according to the first embodiment of the application
• Figure 3 is a schematic diagram of the camera according to the first embodiment of the application; as shown in Figures 2 and 3, in this embodiment a convex track is used for reciprocating movement
  • the driving method is roller drive
  • the movement track is straight back and forth.
  • the trigger method adopted is the human body infrared sensor switch
• the track of the camera device (including the track and the video capture car) is a convex track.
• the proximity switch senses the limit iron block, stops the movement in the current direction, and starts moving in the opposite direction; upon reaching the limit iron block at the other end, in response to the proximity switch sensing signal, the car again stops the movement in the current direction and starts moving in the opposite direction.
• the video capture sports car performs uninterrupted linear motion back and forth between the two limit iron blocks.
• the video capture sports car responds to the signal sensed by the human body infrared sensor switch: it starts the uninterrupted back-and-forth linear motion and executes the video acquisition operation when human body infrared is sensed, and stops the uninterrupted reciprocating linear motion when human body infrared is not sensed.
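The limit-block reversal described in this embodiment amounts to a small state machine; a sketch under the assumption that the proximity switch reports which limit block, if any, is currently sensed:

```python
def step_direction(direction, sensed_limit):
    """Reverse the travel direction when a limit block is sensed.

    direction: +1 (toward limit "B") or -1 (toward limit "A").
    sensed_limit: "A", "B", or None when no limit block is in range.
    """
    if sensed_limit == "B" and direction == +1:
        return -1  # stop moving toward B, start moving back toward A
    if sensed_limit == "A" and direction == -1:
        return +1  # stop moving toward A, start moving back toward B
    return direction  # keep moving in the current direction


d = +1
for sensed in [None, None, "B", None, "A"]:
    d = step_direction(d, sensed)
print(d)  # 1
```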
  • the video capture vehicle (or called a sports car) is equipped with running wheels and provides wireless connection.
  • Proximity switches are arranged at both ends of the sports car, and carbon brushes are arranged at the front and rear of the sports car; the two ends of the track are arranged with limit iron posts that can be sensed by the proximity switch, and conductive cables are arranged on both sides of the angle steel track to provide electricity for the sports car.
• the video segment can be generated in the following way: extract the video segment from the continuously captured video stream, taking the time point when the RFID radio frequency signal is sensed as the first frame of the video segment and the time point when the RFID radio frequency signal is sensed to disappear as the end frame of the video segment; the sensed RFID radio frequency identification is saved in the extracted video file's summary.
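Under this scheme, the clip boundaries follow directly from the interval during which the RFID card was sensed; a minimal sketch, where the frame rate and all names are assumptions:

```python
def clip_frame_range(sense_start_s, sense_end_s, fps=25):
    """Map the interval during which an RFID card was sensed to frame indices.

    The first frame is the frame at the moment the radio frequency signal was
    first sensed; the end frame is the frame at the moment it disappeared.
    """
    first_frame = int(sense_start_s * fps)
    end_frame = int(sense_end_s * fps)
    return first_frame, end_frame


def clip_summary(rfid_id, sense_start_s, sense_end_s, fps=25):
    """Bundle the sensed RFID identification into the extracted clip's summary."""
    first, last = clip_frame_range(sense_start_s, sense_end_s, fps)
    return {"rfid": rfid_id, "first_frame": first, "end_frame": last}


print(clip_summary("0042", 12.0, 30.5))
# {'rfid': '0042', 'first_frame': 300, 'end_frame': 762}
```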
• using the RFID radio frequency card for identification and the human body infrared sensor switch to trigger the sports car to start the linear reciprocating movement and acquire video can prevent the device from performing invalid operations, and tourists' mobile video can be captured automatically at any time; behind the booth of the exhibition hall, the video capture sports car reciprocates on the corner track, continuously shooting same-frame video of the visitors and the exhibits.
  • FIG. 4 is a schematic diagram of the first application scenario of a photographing device according to the second embodiment of the application
  • FIG. 5 is a schematic diagram two of the application scenario of the photographing device according to the second embodiment of the present application
  • FIG. 6 is a schematic diagram of the photographing device according to the second embodiment of the present application, as shown in FIG. 4.
• a concave circular track is used, its driving mode is a timing belt wheel, and its motion trajectory is a curved loop with person tracking.
  • the trigger mode is facial recognition, and the movement form is a concave track.
• a video capture sports car is started and tracks the target person from the starting point to the end point; after reaching the end point, it queues with the other video capture sports cars to await the next service.
  • the video capture car (or called a sports car) uses a DC miniature gear reducer motor, a synchronous wheel, and a toothed synchronous belt.
  • the video capture car can be connected wirelessly.
  • the first frame of the video clip starts when the video capture sports car starts to move, and the video capture sports car moves to the bottom end of the track to generate the end frame.
  • the video clip does not contain the capture information from the top of the track to the end.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • facial feature recognition is used. By recognizing tourists with registered facial features, the camera can be triggered to start the mobile vehicle to perform the mobile acquisition video operation, which can autonomously track and shoot, and achieve a presentation effect that cannot be achieved by conventional shooting methods.
• the first video capture sports car automatically tracks and shoots.
• the second video capture sports car is in a ready position.
• on the other side of the track, four video capture sports cars wait to serve.
• the video capture sports car runs in a concave track.
  • the sports car is powered by two electric carbon brushes.
  • the sports car is equipped with a DC miniature gear reducer motor.
  • the synchronous wheel on the output shaft of the reducer meshes with the toothed synchronous belt pasted in the track.
  • Two conductive cables are pasted on one side wall of the track to provide electricity for the video capture sports car through the carbon brush of the sports car.
  • FIG. 7 is a schematic diagram of the first application scenario of the photographing device according to the third embodiment of the present application
• FIG. 8 is a schematic diagram of the second application scenario of the photographing device according to the third embodiment of the present application, as shown in FIG. 7 and FIG. 8.
• the video capture sports car on the ceiling of the T-shaped show stage performance hall moves cyclically along the curve of the snake-shaped slit track.
  • the slit-shaped snake track is adopted.
  • the driving mode is a roller, and the movement track is a curve loop.
  • the example can be applied to the T-shaped runway.
  • a master control switch is used for triggering, and the track can be a ceiling cracked track.
  • the video capture sports car cyclically moves along the track curve; after the main control switch is opened, the video capture sports car stops moving.
  • the recognition method in this embodiment is facial feature recognition.
• the video segment is extracted from the continuously captured video stream, with a fixed time before the time point at which the facial feature is recognized taken as the first frame of the video segment, and a fixed time after the first-frame time point taken as the end frame.
• the extracted video fragment is analyzed for the degree of change in facial expression richness, and the video fragments that satisfy this determination are stored in correspondence with the facial feature recognition identifier.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
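The fixed-offset windowing and richness filter of this embodiment can be sketched as follows; the offsets, threshold, and names are illustrative assumptions, and the expression-richness analysis itself is reduced to a precomputed score:

```python
def face_clip_window(recognized_at_s, lead_s=3.0, length_s=10.0):
    """Return (start, end) times of a clip around a face-recognition time point.

    The first frame is a fixed time before the recognition time point, and the
    end frame is a fixed time after the first-frame time point.
    """
    start = max(0.0, recognized_at_s - lead_s)
    return start, start + length_s


def keep_clip(window, richness_score, threshold=0.5):
    """Store the clip only if the facial-expression richness change qualifies."""
    return richness_score >= threshold


w = face_clip_window(42.0)
print(w)                  # (39.0, 49.0)
print(keep_clip(w, 0.8))  # True
```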
  • FIG. 9 is a schematic diagram of the first application scenario of a photographing device according to the fourth embodiment of the application
  • FIG. 10 is a schematic diagram of the second application scenario of the photographing device according to the fourth embodiment of the present application
  • FIG. 11 is a schematic diagram of the photographing device according to the fourth embodiment of the present application, as shown in FIG. 9.
  • the video capture device is located at the lowest end of the hexagonal column in the non-working state
  • the target person stands on the lifting platform
  • the video capture device is dragged to the corner of the hexagonal column by a drawstring.
• the movable frame at the top is arranged on the periphery of the hexagonal column to drive the video capture device to move.
• This embodiment uses a rod-shaped hexagonal column, a pull rope, and a rotating up-and-down movement.
  • This embodiment can be applied to urban parks.
  • the trigger mode of this embodiment is gravity downward pressure
  • the movement mode is a vertical pole track
• the driving mode is a movable frame.
• the pull rope between the lifting platform and the video capture movable frame drives the movable frame to rotate and move: if the lifting platform is pressed down by gravity, the video capture movable frame rotates and moves upward; if the gravity pressure on the lifting platform is released and it rises, the video capture movable frame rotates and moves downward.
  • an external network instruction identifier can be used, for example, an APP registered user scans the code of the mobile video acquisition device or enters the device number (APP registration identifier).
• the way of generating the video can be as follows: a proximity switch is provided on the pole to sense when the video capture movable frame is in the lowest position.
• when the video capture frame leaves the lowest position, it responds to the signal from the proximity switch to start capturing video and generates the first frame of the video clip; when the video capture frame returns to the lowest position, it responds to the signal from the proximity switch to generate the end frame of the video clip and stop video capture.
  • the received network trigger identifier is stored in the generated video file summary.
• the user uses the portable terminal APP to scan the code to confirm the mobile video acquisition device, and uses this device to perform the mobile video acquisition operation.
  • the shooting angle of view can be gradually increased from the horizontal angle of view containing the user to the high-altitude angle of view of the panoramic view of the scenic area, which cannot be achieved by conventional shooting methods.
• the presentation effect is not limited to the above.
  • FIG. 12 is a schematic diagram of an application scenario of a photographing device according to Embodiment 5 of the present application.
  • this embodiment can be applied to a climbing ladder, and a rope track is provided on one side of the climbing ladder.
  • the rope puller drags the video capture device to slide along the rope track.
  • This embodiment can be referred to as a rope type.
  • a rope puller is used as a drive, a fixed trajectory is adopted, the trigger mode is an RFID radio frequency card, and the movement form is a rope track.
• Both ends of the rope track and the mobile video acquisition device are equipped with RFID readers, which can read, within 10 m, passive RFID radio frequency cards that meet the 860–960 MHz air interface parameters of the ISO/IEC 18000-6 standard.
  • the video capture unit is connected to the pull rope, and a rope puller is arranged at the top of the rope track.
• the rope puller drags the mobile video acquisition device to reciprocate between the two ends of the rope track; a fixed period of time after the time point at which the RFID reader can no longer receive the radio frequency signal, the rope puller stops dragging the mobile video capture device.
• in response to the RFID reader set on the mobile video capture device receiving the radio frequency information identification, the rope puller reduces the drag speed; in response to the RFID reader no longer receiving the radio frequency information identification, the rope puller resumes the towing speed.
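The towing-speed rule (slow down while a visitor's RFID card is in range, resume otherwise) can be sketched as a one-line policy; the speed values are illustrative assumptions:

```python
def tow_speed(card_in_range, normal_mps=1.0, slow_mps=0.3):
    """Choose the rope puller's towing speed from the onboard RFID reading.

    While the RFID reader on the mobile video capture device receives a card's
    radio frequency identification, the device is near a user, so the puller
    slows down to lengthen the shot; otherwise it resumes the normal speed.
    """
    return slow_mps if card_in_range else normal_mps


print(tow_speed(True))   # 0.3
print(tow_speed(False))  # 1.0
```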
  • an RFID radio frequency card is used as identity recognition.
• video capture is performed; after a fixed period of time the capture ends and the video file is generated.
  • the file name of the generated video file is mapped in a database, and the video file in the database corresponds to a plurality of RFID radio frequency information containing time tags.
• the RFID radio frequency information including the time tag records the sensed RFID radio frequency information and the corresponding time point information: the time point at which the radio frequency information was received and the time point at which the radio frequency information was no longer received.
  • the mobile video capture device can capture the video of users within 10m, and reduce the speed of movement when moving to the vicinity of the user to enhance the presentation effect of shooting for the user.
  • FIG. 13 is a schematic diagram of an application scenario of a photographing device according to Embodiment 6 of the present application
  • FIG. 14 is a schematic diagram of a photographing device according to Embodiment 6 of the present application.
  • this embodiment can be applied to trampoline sports.
  • the video capture device tracks the target person on the trampoline moving up and down to shoot.
  • Linear motor induction coils are arranged on both sides of the video capture device, which respectively penetrate two tubular magnetic shafts.
  • This embodiment can be called a linear motor type, and its driving mode is a tubular high-speed servo linear motor, and its trajectory is linear reciprocating.
  • the movement form can be a tubular track.
• the tubular high-speed servo linear motor responds to the video image portrait analysis of the background server: after the person enters the target area, the magnetic-axis linear motor moves to the position corresponding to the height of the person's head, and then tracks the height of the person's head in response to the video image portrait analysis.
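The head-height tracking loop can be sketched as a simple proportional follower; the gain and step limit are assumptions, not values from the patent:

```python
def track_head(motor_pos_m, head_height_m, gain=0.5, max_step_m=0.1):
    """One control step: move the linear motor toward the head height
    reported by the background server's video image portrait analysis."""
    error = head_height_m - motor_pos_m
    # clamp the commanded step so the carriage moves at a bounded speed
    step = max(-max_step_m, min(max_step_m, gain * error))
    return motor_pos_m + step


pos = 1.0
for _ in range(20):
    pos = track_head(pos, 1.6)
print(round(pos, 3))  # converges toward 1.6
```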
  • the trigger method used may be an external network instruction method.
• the portable terminal APP detects that its positioning coordinates are located in a predetermined area and sends a trigger instruction to the mobile video acquisition device in that area. If the positioning coordinates of the portable terminal are located in the set area of the mobile video acquisition device, the APP registration identifier can be acquired and used as the identification identifier. Video clips are extracted from the continuously captured video stream, taking a fixed time after the time point at which video image analysis shows the person entering the target area as the first frame of the video clip, and a fixed time before the time point at which video image analysis shows the person leaving the target area as the end frame of the video clip.
  • Slow motion effect processing is performed on the extracted video segment, and the extracted video segment is saved corresponding to the facial feature recognition identifier.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the network trigger identifier.
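The geofenced network trigger of this embodiment reduces to a distance check between the portable terminal's positioning coordinates and the capture device; the radius and names below are assumptions:

```python
import math

def in_trigger_area(terminal_xy, device_xy, radius_m=10.0):
    """True when the terminal's positioning coordinates fall inside the
    predetermined area around the mobile video acquisition device."""
    dx = terminal_xy[0] - device_xy[0]
    dy = terminal_xy[1] - device_xy[1]
    return math.hypot(dx, dy) <= radius_m


def maybe_trigger(app_registration_id, terminal_xy, device_xy):
    """Send a trigger instruction carrying the APP registration identifier
    when the terminal is inside the device's predetermined area."""
    if in_trigger_area(terminal_xy, device_xy):
        return {"trigger": True, "identification": app_registration_id}
    return {"trigger": False}


print(maybe_trigger("app:user42", (3.0, 4.0), (0.0, 0.0)))
# {'trigger': True, 'identification': 'app:user42'}
```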
  • the high-speed servo linear motor drives the high-speed camera to track the movement in response to the moving state of the person, and achieve the effect that cannot be presented in conventional shooting through the slow-motion effect.
  • the seventh to tenth embodiments and the thirteenth to seventeenth all use facial recognition as the identity recognition method
  • the eleventh embodiment uses an RFID radio frequency card as the identity recognition method
• the twelfth embodiment uses image feature recognition of the name badge as the identification method.
  • FIG. 15 is a schematic diagram of the application scenario of the photographing device according to the seventh embodiment of the present application.
  • FIG. 16 is a schematic diagram of the photographing device according to the seventh embodiment of the present application.
• this embodiment can be called a magnetic levitation track, which can be applied in a science and technology museum; in this embodiment, the target person touches the touch screen to trigger the magnetic levitation video capture sports car to move back and forth in a straight line.
  • the lower part of the magnetic levitation video capture sports car is provided with two skateboards, and the skateboard has a built-in magnet to make the video capture sports car levitate on the track.
  • the driving mode is a magnet and an induction coil
  • the trajectory is a straight back and forth
  • the trigger mode is a touch switch
  • the movement mode is a magnetic levitation track.
• the magnetic levitation video capture sports car is equipped with magnets on both sides of the skateboards, and magnets of the same polarity as the sports car's magnets are arranged on both sides of the track, so that the magnetic levitation video capture sports car floats above the track.
  • a horizontal magnetic column is arranged in the middle of the magnetic levitation video capture sports car, and a horizontal coil is arranged in the middle of the track to attract the horizontal magnetic column. The horizontal coil in the middle of the track drives the horizontal magnetic column of the sports car to move.
  • the magnetic levitation video captures the sports car to perform reciprocating movement on the track according to a predetermined programmed trajectory.
  • a fixed number of linear reciprocating movements are initiated, and the operation of acquiring the video is performed.
• Video clips are extracted from the continuously captured video stream, taking a fixed time after the time point at which video image analysis shows the person entering the target area as the first frame of the video clip, and a fixed time before the time point at which video image analysis shows the person leaving the target area as the end frame of the video clip.
  • the extracted video clips are saved corresponding to the facial feature recognition identifiers.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
• triggering the sports car via the human body infrared sensor switch to start the linear reciprocating movement and video acquisition can prevent the device from performing invalid operations and automatically capture tourists' mobile video at any time.
  • FIG. 17 is a schematic diagram of an application scenario of a photographing device according to Embodiment 8 of the present application
  • FIG. 18 is a schematic diagram of a photographing device according to Embodiment 8 of the present application.
  • the lifting screw sliding table tracks the target person for moving and shooting.
  • the servo motor drives the lead screw to rotate and drives the video capture device to move up and down.
  • the driving mode of this embodiment is a lead screw and a sliding table, its trajectory is a tracking trajectory, its trigger mode is a portrait analysis, and its movement form is a linear track.
• the lead screw sliding table responds to the video image portrait analysis of the back-end server: after the person enters the target area, the sliding table moves to the position corresponding to the height of the person's head, and then tracks the height of the person's head in response to the video image portrait analysis.
• Video clips are extracted from the continuously captured video stream, taking a fixed time after the time point at which video image analysis shows the person entering the target area as the first frame of the video clip, and a fixed time before the time point at which video image analysis shows the person leaving the target area as the end frame of the video clip.
  • the extracted video clips are saved corresponding to the facial feature recognition identifiers.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • FIG. 19 is a schematic diagram of an application scenario of a photographing device according to Embodiment 9 of the present application
  • FIG. 20 is a schematic diagram of a photographing device according to Embodiment 9 of the present application.
  • this embodiment can be referred to as the implementation of the hanging wheel mode.
  • this embodiment can be applied to a passageway in a scenic spot, where a mobile video capture device installed on the upper part of a street light pole moves up and down to shoot a target person.
  • the upper part of the street light pole is equipped with a spherical radar sensor device, a rope puller, and a hanging wheel video capture device.
• the driving mode of this embodiment is a rope puller, the trajectory is up-and-down, the trigger mode is a radar sensor switch, and the movement form is sling reciprocation.
• the video capture device moves up and down along the sling; when the radar sensor switch detects no moving objects, the video capture device stops moving.
  • a video clip is extracted from a continuously captured video stream, and a fixed time point before the time point when the facial feature is recognized is the first frame of the video clip, and a fixed time point after the first frame time point is the end frame. Extract video clips for different facial feature recognition subjects, and save the extracted video clips corresponding to the facial feature recognition identifiers.
  • the file summary of the generated video file corresponds to multiple facial feature recognition identifiers containing time stamps.
• the facial feature recognition identifier containing the time tag records the recognized facial features and the corresponding time point information: the time point at which the facial feature was recognized and the time point at which the facial feature could no longer be recognized.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view that includes tourists to the high-altitude angle of view that overlooks the entire scenery of the scenic area, and achieves a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 21 is a schematic diagram of an application scenario of a photographing device according to Embodiment 10 of the present application.
  • FIG. 22 is a schematic diagram of a photographing device according to Embodiment 10 of the present application.
• this embodiment can be called a conveyor belt type; it is applied to crowded hot spots of scenic areas, where the video capture devices on the conveyor belt continuously move past and shoot the people in the shooting area.
  • the conveyor belt is driven by a geared motor, which drives multiple video capture devices to shoot.
  • the driving mode of this embodiment is a reduction motor and a runner, the trajectory is a belt loop, the trigger mode is a master control switch, and the movement mode is a conveyor belt loop.
  • a video clip is extracted from a continuously captured video stream, and a fixed time point before the time point when the facial feature is recognized is the first frame of the video clip, and a fixed time point after the first frame time point is the end frame. Extract video clips for different facial feature recognition subjects, and save the extracted video clips corresponding to the facial feature recognition identifiers.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • FIG. 23 is a schematic diagram of an application scenario of a photographing device according to Embodiment 11 of the present application
  • FIG. 24 is a schematic diagram of a photographing device according to Embodiment 11 of the present application.
  • this embodiment may be called a scissor lift type
  • This embodiment can be applied to temporary scenic spots.
  • the video capture device is temporarily placed at the temporary scenic spot to shoot the target person from the bottom up.
  • the hydraulic station drives the oil cylinder to drive the lifting device of the scissor fork structure to reciprocate up and down.
  • the driving mode of this embodiment is oil pressure
  • the trajectory is a programmed trajectory
  • the trigger mode is a master control switch
  • the movement form is a retractable lifting structure, such as a scissor fork lifting.
• a fixed time after the RFID reader fails to receive the radio frequency information identification is taken as the end frame of the video clip.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the RFID radio frequency information identification.
• the mobile video capture device can capture the video of users within 10 m and can autonomously vary the shooting angle from the horizontal view of the tourist to a high-altitude view of the entire scenic spot, enhancing the presentation effect of shooting for the user.
  • the mobile video acquisition device can be dragged to a temporary location at any time to perform the task of video capture according to the temporary needs of the scenic spot.
  • FIG. 25 is a schematic diagram of an application scenario of a photographing device according to the twelfth embodiment of the present application
  • FIG. 26 is a schematic diagram of a photographing device according to the twelfth embodiment of the present application.
  • this embodiment may be called a multi-section rod
  • this embodiment can be applied to the entrance of a scenic spot.
  • the target person passes through the photoelectric switch sensing area, and in response to the photoelectric switch sensing signal, the cylinder performs an upward movement to drive the video capture device to shoot the target person.
  • a travel switch is arranged at the top of the vertical pole, and the travel switch signal triggers the cylinder to perform a downward movement.
  • This embodiment is a power air system composed of an air compressor, an oil-water separator, an electromagnetic reversing valve, and an air cylinder, which drives the video capture device to move up and down.
  • the driving mode is an air cylinder
  • the trajectory is reciprocating
  • the trigger mode is a photoelectric switch
  • the movement mode is a multi-section air cylinder.
  • the mobile video acquisition device is set on the top of the multi-section cylinder.
  • the pneumatic pipeline electromagnetic reversing valve controls the expansion and contraction of the multi-section cylinder.
  • the electromagnetic reversing valve responds to the off signal of the through-beam photoelectric switch by controlling the cylinder to extend, and responds to the closing signal of the limit switch by controlling the cylinder to retract.
  • the mobile video acquisition device takes the moment the through-beam photoelectric switch turns off as the first frame of the video clip, and a fixed time point after the moment the travel switch closes as the end frame of the captured clip.
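The switch-bounded clip described above can be sketched as a small helper: the beam-off moment opens the clip and a fixed tail after the travel switch closes it. The tail interval is an assumed parameter, not a value given in the source.

```python
def clip_bounds(beam_off_s, travel_close_s, tail_s=2.0):
    """First frame: the moment the through-beam photoelectric switch
    turns off (beam interrupted by the person). End frame: a fixed
    interval after the travel switch at the top of the pole closes.
    tail_s is an assumed example value."""
    if travel_close_s < beam_off_s:
        raise ValueError("travel switch closed before the beam trigger")
    return beam_off_s, travel_close_s + tail_s
```

For instance, a beam interruption at 10 s followed by the travel switch closing at 14 s yields a clip over [10 s, 16 s] with the default tail.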
  • the generated video clips are saved correspondingly to the feature identification marks of the badge.
  • the image feature identifier of the badge is recorded in the file summary of the generated video file.
  • the multi-section cylinder is triggered to extend and the mobile video acquisition operation begins; the autonomous shooting angle can rise gradually from a horizontal angle of view containing the tourists to a high-altitude view of the panorama of the scenic area, realizing a presentation effect that conventional shooting methods cannot achieve.
  • FIG. 27 is a schematic diagram of an application scenario of a photographing device according to Embodiment 13 of the present application.
  • FIG. 28 is a schematic diagram of a photographing device according to Embodiment 13 of the present application.
  • this embodiment may be called a two-way slide type.
  • This embodiment can be applied to an art gallery.
  • the mobile video acquisition device is driven by a two-way slide to track and shoot.
  • a power supply cable and a rack are arranged on the first rail, which respectively mesh with the power-taking carbon brush and the output shaft gear of the first servo motor.
  • the second track is arranged on the first sliding table, and the second track is arranged with a rack gear which meshes with the gear of the output shaft of the second servo motor.
  • a connection line is provided between the first servo motor and the second servo motor, and is wrapped in the protective drag chain.
  • the video capture device is arranged on the second sliding table.
  • the driving mode of this embodiment is a synchronous belt and a synchronous wheel, the trajectory is a tracking-programming trajectory, the trigger mode is a portrait analysis, and the movement form is a sliding rail.
  • the mobile video acquisition device is configured to program multiple sets of predetermined trajectories corresponding to the multi-angle shooting of the artwork.
  • the mobile video acquisition device responds to the back-end server's portrait analysis of the video image: it tracks and captures the person's activity video for a fixed period, then executes the predetermined trajectory program for the artwork the person is looking at (or the artwork near the person); after the predetermined trajectory program finishes, it continues to respond to the back-end server's portrait analysis and to track the person for a fixed period.
  • video clips are extracted from the continuously captured video stream: a fixed time point before the moment the video image analysis detects the person entering the target area is taken as the first frame of the clip, and a fixed time point after the moment the analysis detects the person leaving the target area is taken as the end frame. Video clips are extracted for each facial-feature recognition subject, and the extracted clips are saved in correspondence with the facial feature recognition identifiers.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
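The enter/leave rule can be sketched per person: each clip runs from a fixed interval before the person enters the target area to a fixed interval after they leave, keyed by the recognition identifier. The event tuples and the pre/post offsets are illustrative assumptions.

```python
def area_clips(events, pre_s=2.0, post_s=2.0):
    """events: list of (person_id, enter_s, leave_s) tuples produced
    by the back-end portrait analysis. Returns {person_id: (start, end)}
    clip bounds: a fixed interval before entering is the first frame,
    a fixed interval after leaving is the end frame.
    pre_s and post_s are assumed example values."""
    return {pid: (max(0.0, enter_s - pre_s), leave_s + post_s)
            for pid, enter_s, leave_s in events}
```

A person detected in the area from 5 s to 9 s thus yields a clip over [3 s, 11 s] with the default offsets.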
  • FIG. 29 is a schematic diagram of an application scenario of a photographing device according to Embodiment 14 of the present application.
  • FIG. 30 is a schematic diagram of a photographing device according to Embodiment 14 of the present application.
  • the video capture device is set at the bottom end of the water storage bucket. When the water level in the bucket reaches a predetermined level, the bucket's center of gravity shifts and the video capture device moves upward; after the water in the bucket is emptied, the bucket returns to its original position and the video capture device moves back down.
  • the photographing device in this embodiment may include: a water storage bucket, a rotating shaft, a water valve, a video capture device, and a counterweight.
  • the driving mode is water energy
  • the trajectory is water pressure back and forth
  • the trigger mode is water flow
  • the movement mode is single-shaft rotation: when water flows, the water storage bucket swings repeatedly about the rotating shaft, and the mobile video acquisition device at one end of the bucket moves up and down along an arc centered on the shaft.
  • video clips are extracted from the continuously captured video stream: a fixed time point before the moment the video image analysis detects the person entering the target area is taken as the first frame of the clip, and a fixed time point after the moment the analysis detects the person leaving the target area is taken as the end frame.
  • the extracted video clips are saved corresponding to the facial feature recognition identifiers.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view containing tourists to the high-altitude angle of view of the overview of the panoramic view area, realizing a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 31 is a schematic diagram of an application scenario of a photographing device according to Embodiment 15 of the present application.
  • FIG. 32 is a schematic diagram of a photographing device according to Embodiment 15 of the present application. As shown in FIG. 31 and FIG. 32, this embodiment may be called a turntable type. This embodiment can be applied to a water park, where multiple video capture devices are arranged on a rotating water wheel, and multiple video capture devices perform continuous shooting at the same time.
  • the photographing device in this embodiment may include: a waterwheel, a water valve, a video capture device, and a counterweight.
  • the driving mode of this embodiment is water energy
  • the trajectory is cyclic
  • the trigger mode is water flow
  • the movement mode is single-rotation shaft rotation.
  • the water storage buckets drive the runner to rotate, and the plurality of mobile video acquisition devices arranged on the runner move in a circle about the rotating shaft.
  • video clips are extracted from the continuously captured video stream: a fixed time point before the moment the video image analysis detects the person entering the target area is taken as the first frame of the clip, and a fixed time point after the moment the analysis detects the person leaving the target area is taken as the end frame.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view that includes tourists to the high-altitude angle of view that overlooks the entire scenery of the scenic area, and achieves a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 33 is a schematic diagram of an application scenario of a photographing device according to the sixteenth embodiment of the present application
  • FIG. 34 is a schematic diagram of a photographing device according to the sixteenth embodiment of the present application.
  • this embodiment may be referred to as a multi-axis rocker arm type.
  • this embodiment can be used for a spiral staircase: the video capture device tracks the movement of the target person walking on the spiral staircase.
  • the first stepping motor drives the first rocker arm to rotate
  • the second stepping motor drives the second rocker arm to rotate.
  • the second rocker arm rotating shaft is arranged on the first rocker arm.
  • the driving mode is a stepping motor
  • the trajectory is a tracking trajectory
  • the trigger mode is a portrait analysis
  • the movement mode is multi-joint rotation.
  • the turntable and the rocker arm respond to the background server's portrait analysis of the video image: after the person enters the target area, the rocker arm and turntable make the video acquisition device track the movement of the person's head position.
  • in response to the back-end server's portrait analysis of the video image, the mobile video acquisition device takes a fixed time point before the moment the portrait appears in the predetermined area as the first frame of the video clip, and the moment the portrait leaves the predetermined area as the end frame. The generated video clip is saved in correspondence with the facial feature recognition identifier.
  • the file name of the generated video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the mobile video acquisition device of this embodiment tracks the target person for shooting, and achieves a presentation effect that cannot be achieved by conventional shooting means.
  • FIG. 35 is a schematic diagram of the first application scenario of a photographing device according to the seventeenth embodiment of the application
  • FIG. 36 is a schematic diagram of the second application scenario of the photographing device according to the seventeenth embodiment of the present application
  • FIG. 37 is a schematic diagram of the photographing device according to the seventeenth embodiment of the present application
  • this embodiment can be called a sliding table and rocker arm combination type.
  • This embodiment can be applied to the welcome lane of a scenic spot.
  • a welcoming device that waves a flagpole is installed on the welcome lane of the scenic spot; the welcoming device is also equipped with a video capture device.
  • the first cylinder controls the telescopic movement of the scissor and fork mechanism on the sliding table
  • the second cylinder controls the rocker arm to swing up and down.
  • the photographing device in this embodiment may include: an air pump, an oil-water separator, a first cylinder, a first electromagnetic directional valve, a second electromagnetic directional valve, a second cylinder, a third electromagnetic directional valve, and a fourth electromagnetic directional valve .
  • the driving mode of this embodiment is an air cylinder
  • the trajectory is a random and disordered trajectory
  • the trigger mode is a master control switch
  • the movement form is a linear reciprocating and single-joint rotation of the sliding table.
  • the slide table is set to the expansion control of the first cylinder
  • the rocker arm is set to the expansion control of the second cylinder.
  • the first solenoid reversing valve controls the extension of the first cylinder
  • the second solenoid reversing valve controls the contraction of the first cylinder
  • the first solenoid reversing valve and the second solenoid reversing valve randomly select a duration within 5 to 20 seconds, and the two reversing-valve commands execute alternately, each selected duration controlling the operating time of the first cylinder.
  • For example, 8 seconds is randomly selected as the first time sequence: the first electromagnetic reversing valve ventilates the cylinder while the second exhausts outward, so the sliding table moves to one end; after 8 seconds, the second time sequence begins, with 17 seconds randomly selected as its duration.
  • In the second time sequence the two valves exchange commands: the first solenoid reversing valve exhausts outward and the second ventilates the first cylinder, so the sliding table moves in the opposite direction; after 17 seconds, the third time sequence is executed.
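The alternating random timing described above can be sketched as a schedule generator: each phase draws a random duration within the stated bounds, and the valve that ventilates the cylinder swaps every phase so the sliding table reciprocates irregularly. The 5-20 second range comes from the text; the seed and phase names are assumptions for illustration.

```python
import random

def valve_schedule(total_s, lo_s=5.0, hi_s=20.0, seed=0):
    """Generate alternating (phase, duration) commands for the two
    reversing valves until total_s seconds are covered. Each duration
    is drawn uniformly from [lo_s, hi_s]; the phase swaps between
    'extend' (first valve ventilates) and 'retract' (second valve
    ventilates) every step. seed is for reproducibility only."""
    rng = random.Random(seed)
    t, phase, schedule = 0.0, "extend", []
    while t < total_s:
        duration = rng.uniform(lo_s, hi_s)
        schedule.append((phase, duration))
        t += duration
        phase = "retract" if phase == "extend" else "extend"
    return schedule
```

The same generator with a 3-12 second range would model the second cylinder's rocker-arm timing.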
  • the third solenoid reversing valve controls the extension of the second cylinder
  • the fourth solenoid reversing valve controls the contraction of the second cylinder
  • the third solenoid reversing valve and the fourth solenoid reversing valve randomly select a duration within 3 to 12 seconds, and the two reversing-valve commands execute alternately, each controlling the operating time of the second cylinder. For example, 5 seconds is randomly selected as the first time sequence: the third electromagnetic reversing valve ventilates the cylinder while the fourth exhausts outward, so the rocker arm moves upward; after 5 seconds, the second time sequence begins, with 9 seconds randomly selected as its duration.
  • In the second time sequence the two valves exchange commands: the third solenoid reversing valve exhausts outward and the fourth ventilates the second cylinder, so the rocker arm moves down; after 9 seconds, the third time sequence is executed.
  • the video capture device follows the sliding table and rocker arm to move randomly; after the main control switch is turned off, the video capture device stops moving.
  • the file name of the extracted video file is mapped in the database, and the video file in the database corresponds to the facial feature recognition identifier.
  • the autonomous shooting angle of view can be gradually increased from the horizontal angle of view that includes tourists to the high-altitude angle of view that overlooks the entire scenery of the scenic area, and achieves a presentation effect that cannot be achieved by conventional shooting methods.
  • FIG. 38 is a schematic diagram of the video processing device according to the embodiment of the present application, as shown in FIG. 38 ,
  • the video processing device includes: a first receiving unit 3801, a first acquiring unit 3803, and an extracting unit 3805.
  • the video processing device will be described in detail below.
  • the first receiving unit 3801 is used to receive a trigger signal, where the trigger signal is used to trigger the camera device to shoot, the camera device includes a driving device and a camera, the driving device is used to drive the camera in the camera to move; the camera is used to move Video shooting during the process.
  • the first acquisition unit 3803 is configured to acquire a video captured by the camera under the trigger of a trigger signal.
  • the extraction unit 3805 is configured to extract the identity information corresponding to the person in the video, and save the corresponding relationship between the identity information and the video.
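The three units just described (receive trigger, acquire the triggered video, extract identity and save the correspondence) can be sketched as a small pipeline. The lambda stubs standing in for the camera hardware and the recognition back end are hypothetical placeholders, not part of the disclosed device.

```python
class VideoProcessingDevice:
    """Sketch of the units in Fig. 38: on a trigger signal, acquire
    the video shot by the camera, extract the identification
    information of the persons in it, and save the
    identification-to-video correspondence."""

    def __init__(self, capture, extract_ids):
        self.capture = capture          # camera device: signal -> video
        self.extract_ids = extract_ids  # recognizer: video -> identifiers
        self.store = {}                 # identification info -> [videos]

    def on_trigger(self, signal):
        video = self.capture(signal)                # first acquiring unit
        for identity in self.extract_ids(video):    # extraction unit
            self.store.setdefault(identity, []).append(video)
        return video

# Hypothetical stubs in place of real hardware and analysis:
device = VideoProcessingDevice(
    capture=lambda sig: f"video-{sig}",
    extract_ids=lambda video: ["person-A"],
)
device.on_trigger("trigger-1")
```

The saved `store` is what the later retrieval step searches by identification information.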
  • the first receiving unit 3801, the first obtaining unit 3803, and the extraction unit 3805 correspond to steps S102 to S106 of the first embodiment; these units implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in the first embodiment. It should be noted that, as part of the device, the above-mentioned units may run in a computer system as a set of computer-executable instructions.
  • the first receiving unit may be used to receive the trigger signal, where the trigger signal triggers the camera device to shoot; the camera device includes a driving device and a camera, the driving device drives the camera in the camera device to move, and the camera shoots video during the movement. The first acquisition unit is then used to acquire the video captured by the camera under the trigger signal, and the extraction unit is used to extract the identification information corresponding to the person in the video and to save the correspondence between the identification information and the video.
  • with the video processing device of this application, the video containing the user and the identification information of the person recognized from the video are saved in correspondence in advance, and the user's video is obtained from the pre-stored video library according to the user's identification information. This achieves the technical effect of improving the reliability of the video processing method and the user experience, thereby solving the technical problem of low reliability of video processing methods in the related art.
  • the video processing device further includes: a sensing unit, configured to sense, via a sensor provided on the camera device, that someone is present in the shooting range of the camera device before the trigger signal is received; and a first response unit, used to send out the trigger signal in response to the sensing result of the sensor.
  • the sensor is at least one of the following: an infrared sensing unit, a radio frequency sensing unit, and a radar detection unit.
  • the video processing device further includes: a second receiving unit, configured to receive a user's operation of a switch in the camera device before receiving the trigger signal; and a second response unit, configured to respond to Operate to issue a trigger signal.
  • the switch is at least one of the following: a key switch unit, a touch switch unit, and a photoelectric switch unit.
  • the video processing device further includes: a third receiving unit, configured to receive a user's operation on the software interface before receiving the trigger signal; and a third response unit, configured to respond to the operation, Send a trigger signal to the camera via the network.
  • the device further includes: a third obtaining unit, configured to obtain the identification information of the camera device by scanning a graphical code set on the camera device before receiving the user's operation on the software interface;
  • the display unit is used to display the operations that can be performed on the camera device on the software interface according to the identification information.
  • the video processing apparatus further includes: a fourth obtaining unit, configured to obtain the geographic location information of the handheld device on which the software interface is displayed, before receiving the user's operation on the software interface; and a display unit, configured to display on the software interface, according to the geographic location information, the camera devices that the person can control within a predetermined range of the handheld device and the operations that can be performed on those camera devices.
  • the video processing device further includes: a second acquiring unit, configured to, after the identification information corresponding to the person in the video has been extracted and the correspondence between the identification information and the video has been saved, acquire the identification information of the video to be extracted and search the saved videos for one or more videos corresponding to that identification information; and a display unit, configured to display the one or more videos to the user corresponding to the identification information of the video to be extracted.
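The retrieval step of the second acquiring unit reduces to a lookup over the saved correspondence; a minimal sketch, assuming the same dictionary-style store used when saving, with hypothetical identifiers and file names:

```python
def find_videos(store, identification):
    """Search the saved videos for those mapped to the identification
    information of the video to be extracted; returns an empty list
    when no saved video matches."""
    return store.get(identification, [])

# Hypothetical saved correspondence: identifier -> video file names.
store = {"person-A": ["v1.mp4", "v2.mp4"]}
```

The result list is what the display unit would then present to the user matching that identification information.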
  • the extraction unit is used to identify attachments and/or biological characteristics of the person, and to use the characteristic information of the attachments and/or of the biological characteristics as the identification information of the person; the second acquisition unit is used to acquire the characteristic information of the person's attachments and/or biological characteristics and to determine it as the person's identification information. The attachments include at least one of the following: clothing, accessories, and hand-held objects; the biological characteristics include at least one of the following: facial features and posture characteristics; an attachment is used to uniquely identify a person within a predetermined area.
  • the extraction unit is also used to extract the sensed radio frequency identification information from the radio frequency signal and use it as the identification information of the person, or to extract the identification information of the person from the network trigger signal; the second acquisition unit is likewise used to extract the sensed radio frequency identification information from the radio frequency signal and determine it as the person's identification information, or to extract the person's identification information from the network trigger signal.
  • when there are multiple people in the video, the extraction unit includes: a first recognition module, configured to recognize the attachments and/or biological characteristics on each of the multiple people; and a first storage module, configured to determine the characteristic information of the attachments and/or biological characteristics on each person as that person's identification information, and to save the correspondence between each person's identification information and the video.
  • the first saving module includes: a first determining sub-module, configured to determine the time node at which each person's identification information is recognized in the video; a second determining sub-module, configured to use that time node as the time label of each person's identification information; and a saving sub-module, configured to save the correspondence between each person's identification information, the time label added to it, and the video.
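The time-label bookkeeping above can be sketched as a grouping over recognition events; the event tuples and identifiers are illustrative assumptions:

```python
def time_labelled_ids(detections):
    """detections: list of (time_s, person_id) events, i.e. the time
    nodes at which each person's identification information is
    recognized in the video. Returns {person_id: [time labels]} so the
    saved correspondence carries the time labels for each identifier."""
    labels = {}
    for time_s, person_id in detections:
        labels.setdefault(person_id, []).append(time_s)
    return labels
```

Each identifier thus keeps the ordered list of moments it was seen, which a retrieval step could use to locate a person within a longer video.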
  • the movement trajectory of the camera includes at least one of the following: a reciprocating trajectory between a predetermined starting point and a predetermined end point, a cyclic trajectory along a predetermined path, a trajectory designed by a predetermined program, and a tracking trajectory that follows a target object.
  • the movement of the camera is at least one of the following: orbital movement and rotational movement.
  • the driving mode of the driving device is at least one of the following: mechanical driving, electromagnetic driving, and pressure driving.
  • a storage medium includes a stored program, wherein the program executes any one of the above-mentioned video processing methods.
  • a processor which is configured to run a program, wherein the video processing method of any one of the above is executed when the program is running.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units may be a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present application, in essence or in the part contributing to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, read-only memory (ROM), random access memory (RAM), a mobile hard disk, a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to a video processing method and device. The method comprises the steps of: receiving a trigger signal, the trigger signal being used to trigger a camera device to shoot, the camera device comprising a driving device and a camera, the driving device being used to drive the camera in the camera device to move, and the camera being used to capture video during the movement; obtaining a video captured by the camera upon the trigger signal; extracting the identification information corresponding to a person in the video, and saving a correspondence between the identification information and the video; obtaining the identification information of a video to be extracted, and searching the saved videos for one or more videos corresponding to the identification information of the video to be extracted; and displaying the video or videos to the user corresponding to the identification information of the video to be extracted. The present invention solves the technical problem, in the prior art, of low reliability of video processing methods.
PCT/CN2020/134645 2019-12-27 2020-12-08 Procédé et dispositif de traitement vidéo WO2021129382A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911380674.8 2019-12-27
CN201911380674.8A CN113055586A (zh) 2019-12-27 2019-12-27 视频处理方法及装置

Publications (1)

Publication Number Publication Date
WO2021129382A1 true WO2021129382A1 (fr) 2021-07-01

Family

ID=76506813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134645 WO2021129382A1 (fr) 2019-12-27 2020-12-08 Procédé et dispositif de traitement vidéo

Country Status (2)

Country Link
CN (1) CN113055586A (fr)
WO (1) WO2021129382A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113617698A (zh) * 2021-08-20 2021-11-09 杭州海康机器人技术有限公司 一种包裹追溯方法、装置、系统、电子设备及存储介质
CN114247125A (zh) * 2021-12-29 2022-03-29 尚道科技(深圳)有限公司 基于识别模组的运动区域内成绩数据记录方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105159959A (zh) * 2015-08-20 2015-12-16 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image file processing method and system
CN105279273A (zh) * 2015-10-28 2016-01-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Photo classification method and apparatus
WO2016209517A1 (fr) * 2015-06-25 2016-12-29 Intel Corporation Methods for recording or deleting a video clip
CN106412429A (zh) * 2016-09-30 2017-02-15 Shenzhen Qianhai Hongjia Technology Co., Ltd. Greenhouse-based image processing method and apparatus
CN106559654A (zh) * 2016-11-18 2017-04-05 Guangzhou Xuanzhi Electronic Technology Co., Ltd. Face recognition monitoring and acquisition system and control method thereof
CN109905595A (zh) * 2018-06-20 2019-06-18 Chengdu Xi'ai Technology Co., Ltd. Shooting and playback method, apparatus, device, and medium
CN111368724A (zh) * 2020-03-03 2020-07-03 Chengdu Xi'ai Technology Co., Ltd. Amusement video generation method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581062B (zh) * 2014-12-26 2019-03-15 China Comservice Public Information Industry Co., Ltd. Video surveillance method and system linking identity information with video
DE102017001879A1 (de) * 2017-02-27 2018-08-30 Giesecke+Devrient Mobile Security Gmbh Method for verifying the identity of a user
CN108388672B (zh) * 2018-03-22 2020-11-10 Xi'an Airun Internet of Things Technology Service Co., Ltd. Video search method, apparatus, and computer-readable storage medium
CN110532432A (zh) * 2019-08-21 2019-12-03 Shenzhen Power Supply Bureau Co., Ltd. Person trajectory retrieval method and system, and computer-readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016209517A1 (fr) * 2015-06-25 2016-12-29 Intel Corporation Methods for recording or deleting a video clip
CN105159959A (zh) * 2015-08-20 2015-12-16 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image file processing method and system
CN105279273A (zh) * 2015-10-28 2016-01-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Photo classification method and apparatus
CN106412429A (zh) * 2016-09-30 2017-02-15 Shenzhen Qianhai Hongjia Technology Co., Ltd. Greenhouse-based image processing method and apparatus
CN106559654A (zh) * 2016-11-18 2017-04-05 Guangzhou Xuanzhi Electronic Technology Co., Ltd. Face recognition monitoring and acquisition system and control method thereof
CN109905595A (zh) * 2018-06-20 2019-06-18 Chengdu Xi'ai Technology Co., Ltd. Shooting and playback method, apparatus, device, and medium
CN111368724A (zh) * 2020-03-03 2020-07-03 Chengdu Xi'ai Technology Co., Ltd. Amusement video generation method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113617698A (zh) * 2021-08-20 2021-11-09 Hangzhou Hikrobot Technology Co., Ltd. Package tracing method, apparatus, system, electronic device, and storage medium
CN113617698B (zh) * 2021-08-20 2022-12-06 Hangzhou Hikrobot Co., Ltd. Package tracing method, apparatus, system, electronic device, and storage medium
CN114247125A (zh) * 2021-12-29 2022-03-29 Shangdao Technology (Shenzhen) Co., Ltd. Method and system for recording performance data in a sports area based on a recognition module

Also Published As

Publication number Publication date
CN113055586A (zh) 2021-06-29

Similar Documents

Publication Publication Date Title
WO2021129382A1 (fr) Video processing method and device
CN106843460B (zh) Multi-camera-based multi-target position capture and positioning system and method
CN202008670U (zh) Angle-adjustable face recognition device
US9224037B2 (en) Apparatus and method for controlling presentation of information toward human object
CN107660039B (zh) Lamp control system for recognizing dynamic gestures
CN110533553B (zh) Service providing method and apparatus
US10970525B2 (en) Systems and methods for user detection and recognition
CN107027014A (zh) Motion-oriented intelligent projection system and method
CN101072332A (zh) Method for automatically tracking and shooting a moving target
CN108053523A (zh) Efficient intelligent visitor management service system and working method thereof
CN101520838A (zh) Automatic-tracking and automatic-zooming iris image acquisition method
CN202949571U (zh) Exhibition hall Wi-Fi positioning automatic tour guide service system
CN205656672U (zh) 3D imaging fitting room
CN105335750A (zh) Customer identity recognition system and recognition method
CN201904848U (zh) Multi-camera-based tracking camera device
CN106514671A (zh) Intelligent doorman robot
CN103802111A (zh) Chess-playing robot
CN102867172B (zh) Human eye positioning method and system, and electronic device
CN107471215A (zh) Queuing seat robot control system based on information recognition
CN108875716A (zh) Camera system for tracking and detecting human motion trajectories
CN205408002U (zh) Omnidirectional automatic shooting device for scenic spots
CN105894573A (zh) 3D imaging fitting room and imaging method thereof
CN205845105U (zh) Virtual reality spatial movement positioning device for virtual house viewing
CN208117873U (zh) Gatekeeper robot
CN206907094U (zh) Intelligent multi-dimensional personnel information acquisition system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20906912

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20906912

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/12/2022)
