1. Capture your own video in which the upper body, head, and shoulders are visible and natural, realistic movements are performed. You may want to use a video with a smooth, uncluttered background for easier processing.
2. Detect the head position in each frame using an existing method; the two most popular approaches are that of Viola-Jones (see HW1) and Convolutional Neural Networks (pre-trained models for face detection are readily available). The position of the head will serve as a "guide" for the next step.
3. Detect and track shoulder lines: either one line per shoulder tracked independently, two lines tracked jointly for both shoulders, or a single line spanning both shoulders. Choose the option that gives you the most robust tracking.
4. Detect shoulder shrugging throughout the video. In your execution, each event should be clearly indicated in the corresponding frames, either with text superimposed on the frames or with a change of color of the calculated shoulder lines. Shoulder shrugging should be performed at different "intensities", i.e., from subtle to exaggerated.
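As a sketch of step 3, assuming shoulder edge points have already been extracted (e.g., with an edge detector applied to the region below and beside the detected face box), a shoulder line can be fitted to those points by least squares. The function name and the input format are illustrative assumptions, not a required interface:

```python
def fit_shoulder_line(points):
    """Fit y = a*x + b by least squares to (x, y) edge points.

    `points` is a list of (x, y) pixel coordinates assumed to lie
    on one shoulder's contour; returns the slope/intercept (a, b).
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx  # zero only if all x's coincide
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```

For added robustness (as the step asks), you could fit on a trimmed set of points, e.g., discard points far from an initial fit and refit, so stray edge pixels from clothing or background do not tilt the line.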
The video you use should have a maximum duration of 1 minute, and it should contain at least 8-10 shrugging events that are sufficiently spaced apart in time.
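One way to turn the tracked shoulder line into the shrug events of step 4 is to threshold a per-frame shoulder-height signal (e.g., the mean y-coordinate of the fitted line) against a resting baseline. This is a minimal sketch under assumed, illustrative parameters; the threshold and spacing values would need tuning to your own video:

```python
def detect_shrugs(heights, baseline, min_rise=5.0, min_gap=15):
    """Flag shrug events in a per-frame shoulder-height signal.

    `heights`: mean y-coordinate of the shoulder line per frame
    (image y grows downward, so a shrug *decreases* the value).
    `baseline`: resting shoulder height in pixels.
    `min_rise`: pixels above baseline needed to count as a shrug;
    lowering it also captures the subtler shrugs.
    `min_gap`: minimum number of frames between distinct events,
    matching the requirement that events be spaced apart in time.
    Returns the frame indices at which each event starts.
    """
    events = []
    last = -min_gap  # allows an event at frame 0
    in_event = False
    for i, h in enumerate(heights):
        raised = (baseline - h) >= min_rise
        if raised and not in_event and i - last >= min_gap:
            events.append(i)
            last = i
        in_event = raised
    return events
```

The frames listed in the returned indices are where you would superimpose the event text or switch the color of the drawn shoulder lines.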
SUBMIT:
(i) Matlab or Python code.
(ii) Demo (screen recording) of the execution. The head region should be marked with blobs or boxes, and the shoulders with colored line(s), superimposed on each frame of the video. (-3pts if not submitted)