Advanced Computer Vision with Python – Full Course
Learn advanced computer vision using Python in this full course. You will learn state-of-the-art computer vision techniques by building five projects with libraries such as OpenCV and MediaPipe. If you are a beginner, don't be afraid of the term "advanced". Even though the concepts are advanced, they are not difficult to follow.
✏️ This course was developed by Murtaza Hassan. Check out his YouTube channel, Murtaza's Workshop:
Get the code here:
Learn to build computer vision mobile apps from Murtaza:
⭐️ Course Contents ⭐️
⌨️ (0:00:00) Intro
⌨️ (0:01:18) Chapter 1 – Hand Tracking – Basics
⌨️ (0:26:57) Chapter 1 – Hand Tracking – Module
⌨️ (0:49:20) Chapter 2 – Pose Estimation – Basics
⌨️ (1:08:25) Chapter 2 – Pose Estimation – Module
⌨️ (1:28:25) Chapter 3 – Face Detection – Basics
⌨️ (1:52:38) Chapter 3 – Face Detection – Module
⌨️ (2:12:55) Chapter 4 – Face Mesh – Basics
⌨️ (2:33:09) Chapter 4 – Face Mesh – Module
⌨️ (2:52:10) Project 1 – Gesture Volume Control
⌨️ (3:27:45) Project 2 – Finger Counter
⌨️ (4:05:43) Project 3 – AI Personal Trainer
⌨️ (4:52:55) Project 4 – AI Virtual Painter
⌨️ (6:01:26) Project 5 – AI Virtual Mouse
—
Learn to code for free and get a developer job:
Read hundreds of articles on programming:
And subscribe for new videos on technology every day:
I just wanted to add that in Chapter 1, when turning HandTrackingMin into a module, the Hands class from mediapipe expects an argument for a parameter called model_complexity (at least it did for me; it did not seem to be a problem in the video). This argument should be passed right after the maximum number of hands. In the video, however, that position is where detectionCon is passed, which threw a TypeError for me because model_complexity could not handle the non-int value it was getting from the detectionCon argument. So if you get a TypeError when dealing with the Hands class, this could be the reason. Hope it helps!
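For anyone who would rather sidestep the positional ordering entirely, here is a minimal sketch that constructs Hands with keyword arguments, assuming a recent mediapipe release that exposes model_complexity:

import mediapipe as mp

mpHands = mp.solutions.hands
# Keyword arguments make it impossible for detectionCon to be mistaken for model_complexity.
hands = mpHands.Hands(
    static_image_mode=False,
    max_num_hands=2,
    model_complexity=1,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)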
It really helped me also. Thank you
Thanks so much, I spent at least 30 minutes trying to fix this, and it worked on the first try when I tried your method!
Thanks, helped a lot
This comment should be at the top of the comments section. I had the same problem.
Thank you so much!!!
This guy’s journey from simple OpenCV tutorials to this is incredible!!
This guy seems to be creating what we see on TV
@Rifat Hasan Murtaza’s Workshop
What’s his channel’s name?
If you are stuck on the Pose Estimation module, the Pose parameters have been updated. Copy the code below and you should be fine:
def __init__(self, mode=False, complexity=1, smooth=True, segmentation=False, smooth_segmentation=True, detectionCon=0.5, trackCon=0.5):
    self.mode = mode
    self.complexity = complexity
    self.segmentation = segmentation
    self.smooth_segmentation = smooth_segmentation
    self.smooth = smooth
    self.detectionCon = detectionCon
    self.trackCon = trackCon
    self.mpDraw = mp.solutions.drawing_utils
    self.mpPose = mp.solutions.pose
    self.pose = self.mpPose.Pose(self.mode, self.complexity, self.smooth, self.segmentation, self.smooth_segmentation, self.detectionCon, self.trackCon)
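If the positional order keeps changing between mediapipe releases, a version-tolerant alternative is to pass the same settings by keyword; a minimal sketch, assuming the parameter names used by recent mediapipe versions:

import mediapipe as mp

mpPose = mp.solutions.pose
# Keyword arguments, so the call does not depend on mediapipe's positional parameter order.
pose = mpPose.Pose(
    static_image_mode=False,
    model_complexity=1,
    smooth_landmarks=True,
    enable_segmentation=False,
    smooth_segmentation=True,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)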
thank you
Thank you so much ♥
The channel always has what I need. I've boosted my C++ and Python skills after watching some of the videos, and now BOOOOOOM, a computer vision course!!
Can you please tell me all the prerequisites for the course?
@Dinia Adil I think that's too complex for the individual level, and there would be a much smaller audience for it.
why are you learning C++??
I love this channel as well; however, I'd prefer it if they showed us computer vision algorithms rather than just tutorials on how to use existing solutions.
@A Mazlin Games and stuff, man; it's smooth and doesn't lag.
Thank you so much! It's amazing how easy things are to do with OpenCV, MediaPipe, and… your explanations!
Please continue what you are doing. As for myself, I can definitely say you are saving/improving lives ❤️❤️
Hi!
This course is generally really good and I got some new skills from it, but… you add an index column to the hand landmarks and it's unnecessary. Why? MediaPipe always returns 21 points in a list of length 21, so if you want point number 20, you just get it from the list at index 20.
I also implemented it with NumPy arrays, which have great features that help us write better and simpler code.
If you want point 20, I'm pretty sure you have to get index 19, because arrays start at 0; the first object ID in an array is 0.
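For reference, a minimal sketch of indexing the landmarks directly; landmark IDs in MediaPipe run from 0 to 20, so the pinky tip (ID 20) really is at index 20 (the file name below is hypothetical):

import cv2
import mediapipe as mp

mpHands = mp.solutions.hands
hands = mpHands.Hands(static_image_mode=True, max_num_hands=1)

img = cv2.imread("hand.jpg")  # hypothetical input image
results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    handLms = results.multi_hand_landmarks[0]
    # Landmark IDs are 0..20; HandLandmark.PINKY_TIP == 20, so no extra id column is needed.
    tip = handLms.landmark[mpHands.HandLandmark.PINKY_TIP]
    h, w = img.shape[:2]
    print(int(tip.x * w), int(tip.y * h))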
Really fantastic work man! Thank you!
Please do a project on curve lane detection (for self-driving cars).
Here is the solution if you are stuck on the hand tracking module file (27:35). This is because the latest mediapipe version has added a parameter called model_complexity, so we need to add it.
Just copy the code below and it should work for you:
def __init__(self, mode=False, maxHands=2, complexity=1, detectionCon=0.5, trackCon=0.5):
    self.mode = mode
    self.maxHands = maxHands
    self.complexity = complexity
    self.detectionCon = detectionCon
    self.trackCon = trackCon
    self.mpHands = mp.solutions.hands
    self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.complexity,
                                    self.detectionCon, self.trackCon)
    self.mpDraw = mp.solutions.drawing_utils
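A quick way to sanity-check the patched constructor is a short webcam loop; a sketch assuming the module file and class are named as in the video (HandTrackingModule.py / handDetector) and that findHands draws the detections:

import cv2
from HandTrackingModule import handDetector  # assumed file/class names from the video

cap = cv2.VideoCapture(0)
detector = handDetector(detectionCon=0.7)  # keyword argument avoids any ordering issue
while True:
    success, img = cap.read()
    if not success:
        break
    img = detector.findHands(img)
    cv2.imshow("Image", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()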
Thank you brother
Thanks
Works!!!
Thank you so much
I love you dude, thanks. This comment should be pinned.
Amazing content and very easy to understand. Thank you guys
Murtaza sir, excellent video. Loaded with tons and tons of knowledge. No words to thank you for it. Keep it up, Murtaza sir. Keep uploading videos.
I am enjoying learning this stuff, but I am looking forward to modules like Hand, Body, Head, Eyes, or something similar, where we can just pip install a module that has all this kind of code in it with points of attachment. I am sure someone will do it before long. It is like when computers first started: at some point things advanced, and before you knew it you did not have to write in ones and zeros. We are all busy typing the same code, instead of one person writing it and the rest of us just using it as an add-on.
In the gesture volume control program, you are changing the volume range [-65.25, 0] dB to a percentage [0, 100] in a linear manner, but it does not match the volume percentage shown by Windows. Does anyone know how to map the volume range to a percentage accurately, the way the Windows volume control does?
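One approach, sketched below under the assumption that the project uses the same pycaw setup as in the course, is to skip the dB range altogether and drive the endpoint's scalar volume, which is what the Windows slider reports; the hand-distance value and its [50, 300] pixel range are placeholders borrowed from the course:

import numpy as np
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

# Standard pycaw setup for the default speakers.
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))

length = 120  # example hand distance in pixels (placeholder value)
# Map the hand distance [50, 300] px to the 0.0-1.0 scalar range.
volScalar = np.interp(length, [50, 300], [0.0, 1.0])
volume.SetMasterVolumeLevelScalar(volScalar, None)  # tracks the Windows % slider
print(f"Volume set to {int(volScalar * 100)} %")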
Thank you for this very extensive tutorial. Excellent work!
I just can’t begin to describe how incredible this channel is. You guys are amazing!
I don't know why some people say this video is useless. These are good samples for beginners. I have a master's degree in computer vision and work as a computer vision and AI researcher. Computer vision is not something you can explain completely within 7 hours, but you can use it. This video is just a starting point.
I love this course, may God reward you all !
Thank you so much. Well done. This is an amazing course on how to use the MediaPipe library.
Awesome course! I'm learning how AI works right now and this was REALLY interesting. Thank you so much.
What is the name of the IDE?
Thank you for the work and the long time spent making this video and bringing us knowledge!