
Advanced Computer Vision with Python – Full Course

Learn advanced computer vision using Python in this full course. You will learn state-of-the-art computer vision techniques by building five projects with libraries such as OpenCV and MediaPipe. If you are a beginner, don't be afraid of the term 'advanced'. Even though the concepts are advanced, they are not difficult to follow.

✏️ This course was developed by Murtaza Hassan. Check out his YouTube channel, Murtaza's Workshop:

Get the code here:

Learn to build computer vision mobile apps from Murtaza:

⭐️ Course Contents ⭐️
⌨️ (0:00:00) Intro
⌨️ (0:01:18) Chapter 1 – Hand Tracking – Basics
⌨️ (0:26:57) Chapter 1 – Hand Tracking – Module
⌨️ (0:49:20) Chapter 2 – Pose Estimation – Basics
⌨️ (1:08:25) Chapter 2 – Pose Estimation – Module
⌨️ (1:28:25) Chapter 3 – Face Detection – Basics
⌨️ (1:52:38) Chapter 3 – Face Detection – Module
⌨️ (2:12:55) Chapter 4 – Face Mesh – Basics
⌨️ (2:33:09) Chapter 4 – Face Mesh – Module
⌨️ (2:52:10) Project 1 – Gesture Volume Control
⌨️ (3:27:45) Project 2 – Finger Counter
⌨️ (4:05:43) Project 3 – AI Personal Trainer
⌨️ (4:52:55) Project 4 – AI Virtual Painter
⌨️ (6:01:26) Project 5 – AI Virtual Mouse

Learn to code for free and get a developer job:

Read hundreds of articles on programming:

And subscribe for new videos on technology every day:


43 comments

  1. Johannes Christensen

    I just wanted to add that in Chapter 1, when turning HandTrackingMin into a module, the Hands class from mediapipe expects an argument for a parameter called model_complexity (at least it did for me; it did not seem to be a problem in the video). This argument should be passed right after the maximum number of hands. In the video, however, that is where detectionCon is passed, which threw a TypeError for me because model_complexity could not handle the non-int value it was getting from the detectionCon argument. So if you get a TypeError when dealing with the Hands class, this could be the reason. Hope it helps!
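
    A minimal sketch of the workaround described above, assuming the module keeps the course's variable names (mode, maxHands, detectionCon, trackCon): passing MediaPipe's parameters as keyword arguments sidesteps the positional-order problem entirely.

    import mediapipe as mp

    class handDetector:
        def __init__(self, mode=False, maxHands=2, detectionCon=0.5, trackCon=0.5):
            self.mpHands = mp.solutions.hands
            # Keyword arguments keep detectionCon out of the model_complexity slot,
            # so no TypeError is raised on newer mediapipe versions.
            self.hands = self.mpHands.Hands(
                static_image_mode=mode,
                max_num_hands=maxHands,
                min_detection_confidence=detectionCon,
                min_tracking_confidence=trackCon,
            )
            self.mpDraw = mp.solutions.drawing_utils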

  2. Umang Gupta

    If you are stuck at the Pose Estimation module, the Pose parameters have been updated. Copy the code below and you should be fine:

    def __init__(self, mode=False, complexity=1, smooth=True, segmentation=False,
                 smooth_segmentation=True, detectionCon=0.5, trackCon=0.5):
        self.mode = mode
        self.complexity = complexity
        self.segmentation = segmentation
        self.smooth_segmentation = smooth_segmentation
        self.smooth = smooth
        self.detectionCon = detectionCon
        self.trackCon = trackCon

        self.mpDraw = mp.solutions.drawing_utils
        self.mpPose = mp.solutions.pose
        # Positional order matches the current mp.solutions.pose.Pose signature.
        self.pose = self.mpPose.Pose(self.mode, self.complexity, self.smooth,
                                     self.segmentation, self.smooth_segmentation,
                                     self.detectionCon, self.trackCon)

  3. profsor500

    Hi!
    This course is generally really good and I picked up some new skills from it, but the index column you add to the hand landmark list is unnecessary. Why? MediaPipe always returns 21 points as a list of length 21, so if you want point number 20 you can simply read it at index 20.
    I also implemented this with NumPy arrays, which have great features that help us write better and simpler code.
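
    To illustrate the point above, here is a minimal sketch that reads one landmark straight from MediaPipe's fixed 21-point list, with no extra id column (the file name "hand.jpg" is just a placeholder):

    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
    img = cv2.imread("hand.jpg")                       # any BGR image containing a hand
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        hand = results.multi_hand_landmarks[0]         # first detected hand
        lm = hand.landmark[20]                         # pinky tip is simply index 20
        h, w, _ = img.shape
        print(int(lm.x * w), int(lm.y * h))            # normalised coords scaled to pixels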

  4. Le Thinh Andrew

    Here is the solution if you are stuck on the file handtrackingmodule (27:35): the latest version has added a parameter called complexity, so we need to add it as well.

    Just copy the code below and it should work for you:
    def __init__(self, mode=False, maxHands=2, complexity=1, detectionCon=0.5, trackCon=0.5):
        self.mode = mode
        self.maxHands = maxHands
        self.complexity = complexity
        self.detectionCon = detectionCon
        self.trackCon = trackCon

        self.mpHands = mp.solutions.hands
        # complexity sits between maxHands and detectionCon in the current Hands signature.
        self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.complexity,
                                        self.detectionCon, self.trackCon)
        self.mpDraw = mp.solutions.drawing_utils

  5. David

    I am enjoying learning this stuff, but I am looking forward to ready-made modules like Hand, Body, Head, or Eyes, where we can just pip install a package that has all this kind of code in it with points of attachment. I am sure someone will do it before long. It is like when computers first started out: at some point things advanced, and before you knew it you did not have to write in ones and zeros. Right now we are all busy typing the same code, instead of one person writing it and the rest of us just using it as an add-on.

  6. Rahul Matolia

    In the gesture volume control program, you map the volume range [-65.25, 0] dB to a percentage [0, 100] linearly. But this does not match the volume percentage shown by Windows. Does anyone know how to interpolate the volume range to a percentage accurately, the way the Windows volume control does?
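
    One possible answer, sketched below under the assumption that the project's hand-length range of roughly 50 to 300 pixels still applies: pycaw also exposes SetMasterVolumeLevelScalar, which takes a 0.0 to 1.0 value that tracks the Windows volume slider directly, so you can interpolate the finger distance straight to that scalar instead of going through the dB range [-65.25, 0].

    import numpy as np
    from ctypes import cast, POINTER
    from comtypes import CLSCTX_ALL
    from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

    devices = AudioUtilities.GetSpeakers()
    interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
    volume = cast(interface, POINTER(IAudioEndpointVolume))

    length = 175                                         # pixel distance between thumb and index tips
    volScalar = np.interp(length, [50, 300], [0.0, 1.0])
    volume.SetMasterVolumeLevelScalar(volScalar, None)   # 0.50 shows as 50% in the Windows mixer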

  7. SeungJong Yoon

    I don't know why some people say this video is useless. These are good samples for beginners. I have a master's degree in computer vision and work as a computer vision and AI researcher. Computer vision is not something you can explain completely within 7 hours, but you can still put it to use. This video is just a starting point.
