Here is a list of the best Coursera courses for machine learning.
As the first machine learning MOOC, this Machine Learning course is provided by Stanford University and taught by Professor Andrew Ng, and it remains one of the best online machine learning courses for anyone who wants to learn the subject. The content includes:
- Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks).
- Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning).
- Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI).
This Machine Learning Specialization, provided by the University of Washington, offers a case-based introduction to the exciting, high-demand field of machine learning. Students will learn to analyze large and complex datasets, build applications that can make predictions from data, and create systems that adapt and improve over time. In the final Capstone Project, students will apply their skills to solve an original, real-world problem through the implementation of machine learning algorithms. The specialization includes 4 related courses:
This is the first course of the Machine Learning Specialization, which gives students hands-on experience with machine learning through a series of practical case studies. By the end of the course, students will be able to:
- Identify potential applications of machine learning in practice.
- Describe the core differences in analyses enabled by regression, classification, and clustering.
- Select the appropriate machine learning task for a potential application.
- Apply regression, classification, clustering, retrieval, recommender systems, and deep learning.
- Represent your data as features to serve as input to machine learning models.
- Assess the model quality in terms of relevant error metrics for each task.
- Utilize a dataset to fit a model to analyze new data.
- Build an end-to-end application that uses machine learning at its core.
- Implement these techniques in Python.
This is the second course of the Machine Learning Specialization. In this course, you will explore regularized linear regression models for the tasks of prediction and feature selection. You will be able to handle very large sets of features and select between models of various complexity. You will also analyze the impact of aspects of your data, such as outliers, on your selected models and predictions. To fit these models, you will implement optimization algorithms that scale to large datasets. Learning Outcomes: By the end of this course, you will be able to:
- Describe the input and output of a regression model.
- Compare and contrast bias and variance when modeling data.
- Estimate model parameters using optimization algorithms.
- Tune parameters with cross validation.
- Analyze the performance of the model.
- Describe the notion of sparsity and how LASSO leads to sparse solutions.
- Deploy methods to select between models.
- Exploit the model to form predictions.
- Build a regression model to predict prices using a housing dataset.
- Implement these techniques in Python.
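The way LASSO "leads to sparse solutions" can be sketched with a tiny coordinate descent implementation. Everything below is illustrative and not from the course itself: the toy data, the penalty value `lam`, and the function names are all invented, and the course works with a real housing dataset and a cross-validated penalty instead.

```python
# A minimal sketch of LASSO coordinate descent, showing how the
# soft-thresholding step snaps weak coefficients to exactly zero (sparsity).
# Data and penalty are made up for illustration.

def soft_threshold(rho, lam):
    """Soft-thresholding operator: the core step of LASSO coordinate descent."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0  # small correlations become exactly zero -> sparse model

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iters):
        for j in range(d):
            # correlation of feature j with the residual that excludes feature j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

# Tiny synthetic data: y depends only on the first feature;
# the second feature is noise and should be zeroed out.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.1]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_coordinate_descent(X, y, lam=1.0)
```

With this penalty, the noise feature's coefficient lands at exactly 0.0 while the informative one is merely shrunk slightly below its least-squares value, which is the sparsity behavior the course analyzes.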
This is the third course of the Machine Learning Specialization. In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful and widely used techniques, including logistic regression, decision trees, and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant issues you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier.
This is the fourth course of the Machine Learning Specialization. In this course, you will examine structured representations for describing the documents in a corpus, including clustering and mixed membership models such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce. Learning Outcomes: By the end of this course, you will be able to:
- Create a document retrieval system using k-nearest neighbors.
- Identify various similarity metrics for text data.
- Reduce computations in k-nearest neighbor search by using KD-trees.
- Produce approximate nearest neighbors using locality sensitive hashing.
- Compare and contrast supervised and unsupervised learning tasks.
- Cluster documents by topic using k-means.
- Describe how to parallelize k-means using MapReduce.
- Examine probabilistic clustering approaches using mixture models.
- Fit a mixture of Gaussians model using expectation maximization (EM).
- Perform mixed membership modeling using latent Dirichlet allocation (LDA).
- Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
- Compare and contrast initialization techniques for non-convex optimization objectives.
- Implement these techniques in Python.
This Mathematics for Machine Learning Specialization, provided by Imperial College London, teaches students the prerequisite mathematics for applications in data science and machine learning. The specialization includes 3 courses:
This is the first course of the Mathematics for Machine Learning Specialization. In this course on linear algebra, we look at what linear algebra is and how it relates to vectors and matrices. Then we look at what vectors and matrices are and how to work with them, including the knotty problem of eigenvalues and eigenvectors, and how to use these to solve problems. Finally, we look at how to use these tools to do fun things with datasets, like rotating images of faces and extracting eigenvectors to see how the PageRank algorithm works.
This is the second course of the Mathematics for Machine Learning Specialization, which is intended to offer an intuitive understanding of calculus, as well as the language necessary to look up concepts yourself when you get stuck. Hopefully, without going into too much detail, you'll still come away with the confidence to dive into some more focused machine learning courses in the future.
This is the third course of the Mathematics for Machine Learning Specialization. This course introduces the mathematical foundations needed to derive Principal Component Analysis (PCA), a fundamental dimensionality reduction technique. We'll cover some basic statistics of datasets, such as mean values and variances; we'll compute distances and angles between vectors using inner products; and we'll derive orthogonal projections of data onto lower-dimensional subspaces. Using all these tools, we'll then derive PCA as a method that minimizes the average squared reconstruction error between data points and their reconstructions. By the end of this course, you'll be familiar with the important mathematical concepts and be able to implement PCA yourself.
This Advanced Machine Learning Specialization, provided by the National Research University Higher School of Economics and Yandex, lets students dive deep into modern AI techniques, teaching computers to see, draw, read, talk, play games, and solve industry problems. The specialization includes 7 courses:
1) Introduction to Deep Learning
2) How to Win a Data Science Competition: Learn from Top Kagglers
3) Bayesian Methods for Machine Learning
4) Practical Reinforcement Learning
5) Deep Learning in Computer Vision
6) Natural Language Processing
7) Addressing Large Hadron Collider Challenges by Machine Learning