feat: face and person detection samples #362
Conversation
Codecov Report
```diff
@@           Coverage Diff           @@
##           master     #362   +/-   ##
=======================================
  Coverage   25.65%   25.65%
=======================================
  Files          38       38
  Lines        9135     9135
=======================================
  Hits         2344     2344
  Misses       6791     6791
```

Continue to review full report at Codecov.
bcoe left a comment
This is looking reasonable to me, a few stylistic nits.
samples/analyze.v1p3beta1.js (Outdated)

```javascript
// const path = 'Local file to analyze, e.g. ./my-file.mp4';

// Reads a local video file and converts it to base64
const file = await util.promisify(fs.readFile)(path);
```
If we're blocking anyway, I might be tempted to use fs.readFileSync and avoid the promisify step.
samples/analyze.v1p3beta1.js (Outdated)

```javascript
const personAnnotations =
  results[0].annotationResults[0].personDetectionAnnotations;

personAnnotations.forEach(personAnnotation => {
```
To avoid the forEach, and an extra layer of callbacks, you could do this:

```javascript
for (const {segment, timestampedObjects} of tracks) {
  console.info(segment, timestampedObjects);
}
```

I've been tending to use this syntax rather than forEach these days; it's also nice because you can break out of the loop with `break`.
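A toy illustration of that break advantage (the track data here is made up, not from the API):

```javascript
// Hypothetical tracks shaped loosely like a Video Intelligence response.
const tracks = [
  {segment: {endSeconds: 2}, timestampedObjects: ['a']},
  {segment: {endSeconds: 5}, timestampedObjects: ['b']},
  {segment: {endSeconds: 9}, timestampedObjects: ['c']},
];

// Collect segments that end at or before a cutoff. With for...of we can
// stop at the first segment past the cutoff; forEach cannot break early.
function segmentsUpTo(tracks, cutoffSeconds) {
  const kept = [];
  for (const {segment} of tracks) {
    if (segment.endSeconds > cutoffSeconds) break;
    kept.push(segment);
  }
  return kept;
}

console.log(segmentsUpTo(tracks, 5).length); // 2
```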
Done, here and elsewhere.
Thank you for showing me this syntax!
Thank you for the review!
```javascript
'use strict';

async function detectPerson(path) {
  //[START video_detect_person_beta]
  // [END video_detect_person_beta]
}

async function detectPersonGCS(gcsUri) {
  //[START video_detect_person_gcs_beta]
      }
    }
  }
  // [END video_detect_person_beta]
```
The closing tag here should be `video_detect_person_gcs_beta`.
Acknowledged.
```javascript
      console.log(`\tAttribute: ${name}; `);
    }
  }
}
```
`// [END video_detect_faces_beta]`
```javascript
      console.log(`\tAttribute: ${name}; `);
    }
  }
}
```
`// [END video_detect_faces_gcs_beta]`
Couple style things:
```javascript
const results = await operation.promise();
console.log('Waiting for operation to complete...');

// Gets annotations for video
```
Add a note that we get the first result because only 1 video is processed.
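One way that note could read in the sample (the response object below is a minimal stand-in, not the real API response):

```javascript
// Minimal stand-in for the operation response used in the snippet above.
const results = [
  {
    annotationResults: [{personDetectionAnnotations: []}],
  },
];

// Gets annotations for the video. We take the first annotation result
// because only a single video was sent in the request.
const [annotationResult] = results[0].annotationResults;
console.log(Array.isArray(annotationResult.personDetectionAnnotations)); // true
```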
```javascript
videoContext: {
  personDetectionConfig: {
    // Must set includeBoundingBoxes to true to get poses and attributes.
    includeBoundingBoxes: true,
```
You don't use the bounding boxes in the output; is it worth adding them?
The bounding boxes are necessary to get the person and face detection segments but don't need to be read.
Reading bounding boxes is a pretty common task for the Video Intelligence API. On one hand, developers should be familiar with it already and this sample is really more about the attributes. On the other hand, it is a common task for using the API and maybe a good thing to include anyway.
I don't have a strong feeling one way or the other about bounding boxes, besides that adding them makes this sample longer. What is your preference?
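If the boxes were printed, the extra output would look roughly like this (the object below is hand-made; the field names follow the normalized bounding box shape used elsewhere in these samples):

```javascript
// Hand-made timestamped object, as returned when includeBoundingBoxes
// is true; coordinates are normalized to [0, 1].
const timestampedObject = {
  normalizedBoundingBox: {left: 0.1, top: 0.2, right: 0.6, bottom: 0.9},
};

const box = timestampedObject.normalizedBoundingBox;
console.log('\tBounding box:');
console.log(`\t\tleft  : ${box.left}`);
console.log(`\t\ttop   : ${box.top}`);
console.log(`\t\tright : ${box.right}`);
console.log(`\t\tbottom: ${box.bottom}`);
```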
```javascript
if (segment.startTimeOffset.seconds === undefined) {
  segment.startTimeOffset.seconds = 0;
}
if (segment.startTimeOffset.nanos === undefined) {
  segment.startTimeOffset.nanos = 0;
}
if (segment.endTimeOffset.seconds === undefined) {
  segment.endTimeOffset.seconds = 0;
}
if (segment.endTimeOffset.nanos === undefined) {
  segment.endTimeOffset.nanos = 0;
}
```
@telpirion, does this ever happen?
If so, that seems like an issue we need to raise.
It can happen, yes. Sometimes the result from the API simply doesn't return a value and thus the property is undefined.
TBH, I'm following the pattern demonstrated here (among other places):
https://github.com/googleapis/nodejs-video-intelligence/blob/master/samples/analyze.v1p2beta1.js#L42-L61
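Since these checks recur across the samples, they could be factored into a small helper (a sketch of the idea, not something the PR does):

```javascript
// Fills in seconds/nanos that the API omitted (the response can leave
// zero-valued fields undefined), so downstream printing and math are safe.
function normalizeOffset(offset = {}) {
  return {
    seconds: offset.seconds === undefined ? 0 : offset.seconds,
    nanos: offset.nanos === undefined ? 0 : offset.nanos,
  };
}

const segment = {
  startTimeOffset: {seconds: 1},
  endTimeOffset: {nanos: 500000000},
};
segment.startTimeOffset = normalizeOffset(segment.startTimeOffset);
segment.endTimeOffset = normalizeOffset(segment.endTimeOffset);
console.log(segment.startTimeOffset.nanos, segment.endTimeOffset.seconds); // 0 0
```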
Thank you for the review! A couple follow-up questions:
The existing beta samples use just "analyze.v1p2beta1.js". The file includes multiple samples (and it's what I used as an example). Does that need to be refactored?
🤖 I have created a release \*beep\* \*boop\*

---

## [2.7.0](https://www.github.com/googleapis/nodejs-video-intelligence/compare/v2.6.5...v2.7.0) (2020-02-13)

### Features

* face and person detection samples ([#362](https://www.github.com/googleapis/nodejs-video-intelligence/issues/362)) ([cff2f36](https://www.github.com/googleapis/nodejs-video-intelligence/commit/cff2f36a4e6294908a4e26587ed840c1ec1b03f8))

### Bug Fixes

* adds spaces to region tags, other fixes ([#369](https://www.github.com/googleapis/nodejs-video-intelligence/issues/369)) ([2b6943e](https://www.github.com/googleapis/nodejs-video-intelligence/commit/2b6943ee0685761a0076c7b8023eed4f12f93d64))
* fixes face and people detection region tags ([#367](https://www.github.com/googleapis/nodejs-video-intelligence/issues/367)) ([ab039b5](https://www.github.com/googleapis/nodejs-video-intelligence/commit/ab039b56b3bea27edf93c4db7c97241599d38419))
* refactors person and face detection samples into separate files ([#370](https://www.github.com/googleapis/nodejs-video-intelligence/issues/370)) ([eb9b400](https://www.github.com/googleapis/nodejs-video-intelligence/commit/eb9b400c24bdf306d8263ec402922b3235754034))
* updates README file with correct links ([#371](https://www.github.com/googleapis/nodejs-video-intelligence/issues/371)) ([fb2701a](https://www.github.com/googleapis/nodejs-video-intelligence/commit/fb2701a81c7476ef06ab279a8d4572f006abe831))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please).
These code samples add the following region tags:
These code samples will be the canonical versions.