Detailed Description
The term "coupled" as used throughout this specification, including the claims, may refer to any direct or indirect connection. For example, if a first device couples (or connects) to a second device, it should be construed that the first device may be directly connected to the second device or the first device may be indirectly connected to the second device through some other device or some connection means. Further, wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts. Elements/components/steps in different embodiments using the same reference numerals or using the same terms may be referred to one another in relation to the description.
Fig. 1 is a block diagram of a photographing system 100 according to an embodiment of the invention. The photographing system 100 includes a first RFID reader 120, a processing device 130, and one (or more) cameras 110. The camera 110 is disposed in a road running path of a road running activity and may photograph one (or more) runners to obtain one (or more) photos. The first RFID reader 120 is also disposed in the road running path of the road running activity. The first RFID reader 120 may read the RFID tags worn by the runners to obtain the time information of the runners passing through a specific location (the location where the first RFID reader 120 is placed on the running path). The present embodiment does not limit the implementation and algorithm of sensing/reading the RFID tags. For example, the first RFID reader 120 may sense/read the RFID tags by using known techniques, which are not described herein.
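As an illustrative sketch only, and not part of any claimed embodiment, the data collected by the camera 110 and the first RFID reader 120 may be modeled as simple records. The field names below are hypothetical and chosen for readability, with Python used purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Photo:
    photo_id: str          # identifier assigned by the camera 110 (hypothetical)
    captured_at: datetime  # time at which the photo was taken
    image_path: str        # where the image file is stored


@dataclass
class RfidRead:
    tag_id: str            # RFID tag worn by a runner
    location_id: str       # e.g., the location of the first RFID reader 120
    read_at: datetime      # time at which the runner passed the reader
```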
The processing device 130 may receive the photos from the camera 110. The processing device 130 may perform a face recognition operation on the photos to group the photos into one (or more) photo groups. For example, the processing device 130 may group the photos containing face A into a photo group A and group the photos containing face B into a photo group B. The present embodiment does not limit the implementation and algorithm of the face recognition operation. For example, the processing device 130 may perform the face recognition operation by using a known technique, which is not described herein.
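The following is a minimal sketch of one way such grouping could be realized. It assumes a hypothetical face_embedding() callable that returns a fixed-length vector for the face in a photo (any known face recognition technique could supply such a vector), and the similarity threshold is likewise an assumed value:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # assumed cosine-similarity cutoff for "same face"


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def group_photos_by_face(photos, face_embedding):
    """Group photos whose faces look alike.

    face_embedding(photo) is a stand-in for any known face recognition
    technique that returns a fixed-length vector for the face in a photo.
    """
    groups = []  # list of (representative_embedding, [photos])
    for photo in photos:
        embedding = face_embedding(photo)
        for representative, members in groups:
            if cosine_similarity(representative, embedding) >= SIMILARITY_THRESHOLD:
                members.append(photo)
                break
        else:
            groups.append((embedding, [photo]))
    return [members for _, members in groups]
```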
Fig. 2 is a flowchart illustrating an operation method of a photographing system according to an embodiment of the invention. Referring to fig. 1 and 2, in step S210, the camera 110 may photograph one (or more) runners to obtain one (or more) photos. These photos may be transmitted to the processing device 130. The processing device 130 may receive the photos from the camera 110. In step S220, the processing device 130 may perform a face recognition operation on the photos to group the photos into one (or more) photo groups. In step S230, the first RFID reader 120 may read the RFID tags worn by the runners to obtain the time information of the runners passing through a specific location (the location where the first RFID reader 120 is placed on the running path). In step S240, the processing device 130 may establish a corresponding relationship between the photo groups and the runners according to the time information provided by the first RFID reader 120.
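The flow of steps S210 to S240 may be summarized by the sketch below; the method names are placeholders for the operations described above and do not correspond to any actual interface:

```python
def operate_photographing_system(camera, rfid_reader, processing_device):
    photos = camera.photograph_runners()                      # step S210
    photo_groups = processing_device.group_by_face(photos)    # step S220
    time_info = rfid_reader.read_tags()                       # step S230
    correspondence = processing_device.match_groups_to_runners(
        photo_groups, time_info)                              # step S240
    return correspondence
```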
In some embodiments, the processing device 130 may actively transmit at least one photo of a corresponding photo group to a mobile communication device (e.g., a mobile phone or a smart watch) of a corresponding runner according to the correspondence obtained in step S240. For example, for each runner (referred to as a corresponding runner herein) among the runners, the processing device 130 can know, according to the correspondence obtained in step S240, which photo group (referred to as a corresponding photo group herein) contains the photos of the corresponding runner, so that the processing device 130 can actively transmit one or more (or all) photos of the corresponding photo group to the smart watch (or mobile phone) of the corresponding runner. Therefore, the corresponding runner can instantly receive his or her photos during the road running activity.
In other embodiments, the processing device 130 may provide the network link information of a corresponding photo group to the mobile communication device (e.g., a mobile phone or a smart watch) of a corresponding runner according to the correspondence obtained in step S240. For example, for each runner (referred to as a corresponding runner herein) among the runners, the processing device 130 can know, according to the correspondence obtained in step S240, which photo group (referred to as a corresponding photo group herein) contains the photos of the corresponding runner, so that the processing device 130 can actively transmit the network link information (e.g., a website address) of the corresponding photo group to the smart watch (or mobile phone) of the corresponding runner. Therefore, the corresponding runner can operate the smart watch (or mobile phone) to connect to the web page/website corresponding to the network link information during the road running activity, so as to view his or her photos in real time.
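Either of the two delivery approaches above could be realized with an ordinary push notification. The sketch below assumes a hypothetical HTTP push endpoint (push_url) and a hypothetical JSON payload format; it simply posts the photo links for one runner:

```python
import json
import urllib.request


def push_photo_links(push_url: str, runner_id: str, photo_urls: list) -> None:
    """POST a runner's photo links to a push-notification endpoint (hypothetical)."""
    payload = json.dumps({"runner": runner_id, "photos": photo_urls}).encode("utf-8")
    request = urllib.request.Request(
        push_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # the response body is not used in this sketch
```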
FIG. 3 is a schematic diagram of the photographing system 100 of FIG. 1 applied to a road running activity. The camera 110 and the first RFID reader 120 are disposed in a running path 310 of the road running activity. In the application scenario shown in FIG. 3, the camera 110 is disposed in the vicinity of the first RFID reader 120, and there are a plurality of runners on the running path 310. When the runner 320 passes the first RFID reader 120 at the location 311 of the running path 310, the first RFID reader 120 may read the RFID tag worn by the runner 320 to obtain the time information of the runner 320 passing through the location 311, and the camera 110 may photograph the runner 320 to obtain a photo (hereinafter referred to as a to-be-confirmed photo). The processing device 130 may compare the time information provided by the first RFID reader 120 (the time at which the runner 320 passed the location 311) with the time information provided by the camera 110 (the time at which the to-be-confirmed photo was taken). When the time difference between the time at which the runner 320 passed the location 311 and the time at which the to-be-confirmed photo was taken is less than a preset tolerance, the processing device 130 may automatically establish the corresponding relationship between the to-be-confirmed photo and the runner 320.
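A minimal sketch of the time comparison described above, assuming a preset tolerance of a few seconds (the actual tolerance value is not specified in this description):

```python
from datetime import datetime, timedelta

TOLERANCE = timedelta(seconds=5)  # assumed preset tolerance


def photo_belongs_to_runner(capture_time: datetime, rfid_time: datetime,
                            tolerance: timedelta = TOLERANCE) -> bool:
    """True if the to-be-confirmed photo was taken within the tolerance of the
    time at which the runner passed the first RFID reader 120."""
    return abs(capture_time - rfid_time) <= tolerance
```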
In some cases, multiple photos of the same runner may be grouped into two or more photo groups. For example, assume that a photo group A was labeled as a photo group of the runner 320 at an earlier point in time, but a photo group B has not yet been associated with any runner (in fact, the photos of the photo group B also show the same runner 320). When the camera 110 captures a to-be-confirmed photo of the runner 320 passing through the location 311, the processing device 130 may group the to-be-confirmed photo into the photo group B according to the result of the face recognition operation. Once the processing device 130 establishes the correspondence between the to-be-confirmed photo and the runner 320 according to the time information provided by the first RFID reader 120, the processing device 130 may automatically mark the photo group B as a photo group of the runner 320. Because the photo group A and the photo group B are both labeled as photo groups of the same runner 320, the processing device 130 can merge the photo group A and the photo group B into one photo group.
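The merging step could be as simple as the following sketch, where labeled_groups is a hypothetical representation of photo groups that have already been marked with a runner:

```python
from collections import defaultdict


def merge_groups_by_runner(labeled_groups):
    """labeled_groups: iterable of (runner_id, list_of_photos) pairs, e.g.
    [("runner_320", photo_group_a), ("runner_320", photo_group_b)].
    Photo groups labeled with the same runner are merged into one group."""
    merged = defaultdict(list)
    for runner_id, photos in labeled_groups:
        merged[runner_id].extend(photos)
    return dict(merged)
```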
In some cases, there may be multiple photo groups that have not yet been associated with any runner. The processing device 130 may define a time range according to the capturing time of any one of these photo groups (hereinafter referred to as a target photo group). According to the time range, the processing device 130 may select one or more candidate runners from the runners, wherein the times at which the candidate runners pass the first RFID reader 120 at the location 311 of the running path 310 fall within the time range. When only one candidate runner passes through the location 311 within the time range, the processing device 130 may automatically establish a correspondence between the target photo group and that candidate runner. When multiple candidate runners pass through the location 311 within the time range, the processing device 130 may handle the situation in different ways, which are described in the following embodiments.
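A sketch of the candidate selection, assuming the time range is a symmetric window around the target photo group's capturing time (the window width is an assumed value):

```python
from datetime import datetime, timedelta


def select_candidate_runners(pass_times, group_capture_time: datetime,
                             half_window: timedelta = timedelta(seconds=10)):
    """pass_times: {runner_id: time the runner passed the location 311}.
    Returns the runners whose pass times fall inside the time range defined
    around the target photo group's capturing time."""
    start = group_capture_time - half_window
    end = group_capture_time + half_window
    return [runner for runner, t in pass_times.items() if start <= t <= end]
```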
In some embodiments, the processing device 130 may provide a user operation interface to allow an operator to establish the correspondence between the photo groups and the candidate runners. The processing device 130 may present to the operator the one or more candidate runners whose times of passing through the location 311 fall within the time range. The operator can view the face photos of the target photo group through the user operation interface and select a corresponding runner from the candidate runners, so as to establish the correspondence between the target photo group and the corresponding runner.
In still other embodiments, another camera (not shown, which can be understood by analogy with the camera 110) and another RFID reader (not shown, which can be understood by analogy with the first RFID reader 120) may be disposed at another location (not shown, referred to herein as a previous location) in the running path 310 before the location 311. By analogy with the related description of Fig. 3, when the runner 320 passes through the previous location, the processing device 130 may obtain a photo of the runner 320 at the previous location and establish the correspondence between the photo taken at the previous location and the runner 320. At the location 311, there may be multiple photo groups that have not yet been associated with any runner. The processing device 130 may use the established correspondence to compare the photo taken at the previous location with these photo groups, so as to automatically select a target photo group from the multiple photo groups at the location 311, thereby establishing the correspondence between the target photo group and the runner 320.
In other embodiments, the processing device 130 may obtain, from an entry data file of the road running activity, a face photo (referred to herein as an entry data photo) of each candidate runner that passed through the location 311 within the time range. The processing device 130 may compare the entry data photos of the candidate runners with the target photo group, so as to automatically select a corresponding runner from the candidate runners, thereby establishing the correspondence between the target photo group and the corresponding runner. Accordingly, the processing device 130 may establish the correspondence between the photo groups and the runners.
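The comparison with the entry data photos might look like the sketch below. It reuses the idea of face vectors from the earlier grouping sketch; both the vectors and the averaging rule are assumptions, not a prescribed matching method:

```python
import numpy as np


def pick_runner_by_entry_photo(target_group_embeddings, entry_embeddings):
    """target_group_embeddings: face vectors extracted from the target photo group.
    entry_embeddings: {runner_id: face vector from that runner's entry data photo}.
    Returns the candidate runner whose entry data photo is, on average, the most
    similar to the faces in the target photo group."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_runner, best_score = None, -1.0
    for runner_id, entry_vector in entry_embeddings.items():
        score = float(np.mean([cosine(entry_vector, g)
                               for g in target_group_embeddings]))
        if score > best_score:
            best_runner, best_score = runner_id, score
    return best_runner
```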
FIG. 4 is a schematic diagram of another scenario in which the photographing system 100 of FIG. 1 is applied to a road running activity. In the embodiment shown in FIG. 4, the photographing system 100 further includes a second RFID reader 140, and the camera 110 includes a plurality of cameras (e.g., the cameras 110_1 to 110_n shown in FIG. 4). The cameras 110_1 to 110_n, the first RFID reader 120, and the second RFID reader 140 are disposed in a running path 410 of a road running activity. The first RFID reader 120 and the second RFID reader 140 are disposed at different locations (e.g., the location 411 and the location 412 shown in FIG. 4) in the running path 410. The first RFID reader 120 can read the RFID tags worn by the runners on the running path 410 to obtain the time information of the runners passing through the location 411 of the running path 410. The second RFID reader 140 can read the RFID tags worn by the runners on the running path 410 to obtain the time information of the runners passing through the location 412 of the running path 410.
The portion of the running path 410 between the first RFID reader 120 and the second RFID reader 140, i.e., between the location 411 and the location 412, is a path segment 413. The cameras 110_1 to 110_n are disposed at different positions in the path segment 413. The cameras 110_1 to 110_n can photograph one (or more) runners at these different positions to obtain one (or more) photos.
The processing device 130 can perform a face recognition operation on the photos provided by the cameras 110_1 to 110_n to group the photos into one (or more) photo groups. The processing device 130 may define a time range according to the capturing time of any one of the photo groups (referred to herein as the target photo group). Based on the time information provided by the first RFID reader 120 and the second RFID reader 140, the processing device 130 may determine that some of the runners (referred to herein as non-candidate runners) were not located in the path segment 413 during the time range. Accordingly, the processing device 130 may filter out the non-candidate runners from the runners to obtain the candidate runners.
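The filtering can be sketched as follows, assuming each runner's pass times at the two readers are kept as a pair (with None for a reader the runner has not yet reached); this representation is an assumption for illustration:

```python
def filter_candidate_runners(reader_times, capture_time):
    """reader_times: {runner_id: (time_at_location_411, time_at_location_412)},
    where either time may be None if the runner has not passed that reader yet.
    A runner is kept as a candidate only if the target photo group's capturing
    time lies between the two readings, i.e. the runner was inside the path
    segment 413 when the photos were taken."""
    candidates = []
    for runner_id, (entered_at, left_at) in reader_times.items():
        if entered_at is None or entered_at > capture_time:
            continue  # non-candidate: had not yet reached the location 411
        if left_at is not None and left_at < capture_time:
            continue  # non-candidate: had already passed the location 412
        candidates.append(runner_id)
    return candidates
```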
In some embodiments, the operator may view the face photos of the target photo group through the user operation interface to select a corresponding runner from the candidate runners, so as to establish the correspondence between the target photo group and the corresponding runner. In other embodiments, the processing device 130 may obtain the face photos of the candidate runners (referred to herein as entry data photos) from an entry data file of the road running activity. The processing device 130 may compare the entry data photos of the candidate runners with the target photo group, so as to automatically select a corresponding runner from the candidate runners, thereby establishing the correspondence between the target photo group and the corresponding runner. Accordingly, the processing device 130 may establish the correspondence between the photo groups and the runners.
Fig. 5 is a timing diagram illustrating photographing performed by the photographing system 100 shown in fig. 1. The horizontal axis shown in fig. 5 represents the time of the road running activity. According to the above descriptions of the embodiments, it is assumed that the photographing system 100 has grouped a plurality of photos taken in a time interval T1 and established the corresponding relationship between the photo groups taken in the time interval T1 and the runners. Taking fig. 5 as an example, the photographing system 100 has marked the photo group 511 taken during the time interval T1 as photos of the runner 521. After the time interval T1 ends, a time interval T2 begins. Multiple runners (including the runner 521) may pass through the RFID reader and the camera at a next location. The camera at the next location may take multiple photos during the time interval T2. The processing device 130 can make a selection from the plurality of photos taken in the time interval T2 by using the time information of the runner 521 passing through the next location. For example, by means of the face recognition operation, the processing device 130 may select a candidate photo 512 and a candidate photo 513 from the plurality of photos taken in the time interval T2, wherein the runner's face in the candidate photo 512 and the candidate photo 513 is determined to be similar to the runner's face in the photo group 511, and the shooting times of the candidate photo 512 and the candidate photo 513 match the time information of the runner 521 passing through the next location. The photographing system 100 may display the photos of the runner 521 that were marked in the time interval T1 to the operator for viewing (or automatically perform an analysis of facial similarity), so as to compare them with the candidate photo 512 and the candidate photo 513 taken in the time interval T2. According to runner characteristics (e.g., face, clothing color, etc.) in the candidate photos, the operator (or the processing device 130, automatically) may confirm that the candidate photo 512 is a photo of the runner 521 and the candidate photo 513 is not a photo of the runner 521. Thus, the processing device 130 can eliminate the candidate photo 513 and add the candidate photo 512 to the photo group 511.
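A sketch of this selection step, where same_runner() stands in for the comparison of runner characteristics (whether performed by the operator through the user operation interface or automatically by the processing device 130) and the time window is an assumed value:

```python
from datetime import timedelta


def extend_photo_group(photo_group_511, t2_photos, runner_pass_time,
                       same_runner, window=timedelta(seconds=5)):
    """t2_photos: list of (photo, capture_time) pairs taken in the time interval T2.
    runner_pass_time: time at which the runner 521 passed the next location.
    same_runner(confirmed_photo, candidate_photo) compares runner characteristics
    such as face and clothing color."""
    reference = photo_group_511[-1]  # a photo already marked as the runner 521
    for photo, captured_at in t2_photos:
        if abs(captured_at - runner_pass_time) > window:
            continue  # capture time does not match the RFID time information
        if same_runner(reference, photo):
            photo_group_511.append(photo)   # e.g., the candidate photo 512 is added
        # otherwise the photo is eliminated (e.g., the candidate photo 513)
    return photo_group_511
```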
It is noted that, in different application scenarios, the related functions of the processing device 130 may be implemented as software, firmware, or hardware by using a general programming language (e.g., C or C++), a hardware description language (e.g., Verilog HDL or VHDL), or other suitable programming languages. The software (or firmware) that can perform the related functions may be stored on any known computer-accessible medium such as magnetic tape, semiconductor memory, magnetic disk, or optical disk (e.g., CD-ROM or DVD-ROM), or may be transmitted through the Internet, wired communication, wireless communication, or other communication media. The software (or firmware) may be stored in a computer-accessible medium so that the programming codes of the software (or firmware) are accessed/executed by a processor of a computer. In addition, the apparatus and method of the present invention may be implemented by a combination of hardware and software.
In summary, the photographing system 100 and the operation method thereof according to the embodiments of the present invention can photograph different runners in a road running path to obtain a plurality of photos. The photographing system 100 may perform a face recognition operation on the photos to group the photos into a plurality of photo groups. In addition, the photographing system 100 can read the RFID tags worn by the runners to obtain the time information of the runners passing through a specific location. The photographing system 100 can immediately establish the corresponding relationship between the photo groups and the runners according to the time information. In some embodiments, the processing device 130 may actively transmit the corresponding photos of a runner (or the network link information of the corresponding photos) to the mobile communication device of that runner. Therefore, the runner can view his or her photos in real time.
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited to the embodiments disclosed, but rather, may be embodied in many other forms without departing from the spirit or scope of the present invention.